Hive and Presto jobs can export data to AWS S3, but the destination prefix for the file is hardcoded in the result export configuration.
Some users want to programmatically export data from TD to S3 in 'chunks' and would like the ability to set up a scheduled job that exports to a destination file in S3 whose name is defined by a variable, such as the job ID, the execution date/time, or another unique identifier.
20 votes
We are looking into feasibility.
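As a client-side workaround in the meantime, the unique destination key can be computed before the export runs. A minimal sketch using only Python's standard `datetime`; the `build_export_key` helper and the prefix/date/job-id layout are illustrative choices, not a TD API:

```python
from datetime import datetime, timezone

def build_export_key(prefix, job_id, when=None):
    """Build a unique S3 key such as 'exports/2016/06/18/job_1234.csv'.

    The layout (prefix/date/job id) is an illustrative choice, not a TD API.
    """
    when = when or datetime.now(timezone.utc)
    return "{0}/{1:%Y/%m/%d}/job_{2}.csv".format(prefix, when, job_id)

key = build_export_key("exports", 1234, datetime(2016, 6, 18))
# key == 'exports/2016/06/18/job_1234.csv'
```

The resulting key can then be passed to whatever export step your scheduler drives, giving each run a distinct destination.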
Many companies use an IDaaS such as OneLogin for SSO (single sign-on) management.
It would be great if Treasure Data supported SAML (Security Assertion Markup Language) and enabled OneLogin integration.
17 votes
Single sign on options are in our plans for after the release of our new Console.
When we first start testing Result Export to a target, we often fail to output data because of misconfiguration.
In that case, we need to retry the queries again and again, which takes a lot of time.
It would therefore be useful if we could retry only the Result Export part, without rerunning the Hive or Presto query, after changing the Result Export configuration.
12 votes
It would be VERY convenient if there were some hotkeys for common actions in the console. In particular, having a way to run a query would be extremely helpful (for instance, ctrl/cmd + Enter) so that I don't have to click "Run" every time.
12 votes
We have started rolling out the alpha version of the new web console. If you're interested, please send an email to firstname.lastname@example.org to get access to alpha testing. > https://docs.treasuredata.com/articles/releasenote-20160301
It would be useful to get a notification in Slack when a long-running job finishes.
Hook integration with other services like Slack or AWS Lambda would give us more flexibility for business workflows.
12 votes
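Until such a hook exists, a small script can post to a Slack Incoming Webhook once a job reaches a final state. A sketch using only the Python standard library; the webhook URL is a placeholder you would create in your own Slack workspace:

```python
import json
from urllib.request import Request, urlopen

# Placeholder: create an Incoming Webhook in Slack and paste its URL here.
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_payload(job_id, status):
    """Message body for the Slack webhook."""
    return {"text": "Job {0} finished with status: {1}".format(job_id, status)}

def notify_job_finished(job_id, status):
    """Fire-and-forget POST to the webhook; add error handling as needed."""
    req = Request(WEBHOOK_URL,
                  data=json.dumps(build_payload(job_id, status)).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    urlopen(req)
```

A cron job or workflow step that polls job status could call `notify_job_finished` when it sees the job complete.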
An option to change the character set and line endings (Windows/Mac/Unix) for result downloads from the browser.
This would reduce the overhead of data integration in complex systems.
SJIS / CRLF is the request from Japanese customers.
11 votes
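The conversion can also be done client-side after download. A minimal sketch, assuming the downloaded result is UTF-8 text with LF line endings:

```python
def to_sjis_crlf(utf8_text):
    """Convert LF-delimited UTF-8 text to CRLF-delimited Shift_JIS bytes."""
    return utf8_text.replace("\n", "\r\n").encode("shift_jis")

# Example: a two-line CSV with a Japanese value.
data = to_sjis_crlf("id,name\n1,\u7530\u4e2d\n")
```

The resulting bytes can be written straight to a file for Windows tools that expect SJIS/CRLF input.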
DataConnector support for Google Spreadsheets as a data source would be useful.
We often use Google Spreadsheets to manage master data and want to import the data every day using replace mode.
10 votes
It would be excellent if I could declare variables somehow. I don't know how this would be implemented, but I use Presto so much for repeated queries that it would be nice to just change variables at the top instead of changing values by hand (for instance, dates that are used many times throughout the query).
10 votes
We are reviewing technical feasibility & options for supporting variables / parameters in Presto.
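Until native parameters are supported, a common workaround is to substitute values into the query text client-side before submitting it. A sketch using Python's `string.Template`; the table name and the `TD_TIME_RANGE` predicate are just illustrative:

```python
from string import Template

# Parameters declared once at the top instead of edited throughout the query.
QUERY = Template(
    "SELECT count(*) FROM events "
    "WHERE TD_TIME_RANGE(time, '$start', '$end')"
)

params = {"start": "2016-06-01", "end": "2016-06-18"}
sql = QUERY.substitute(params)
```

Changing a date then means editing `params` once, rather than hunting through the query body.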
This should be pretty self-explanatory. Currently TD supports only one timestamp column, while many tables have more than one timestamp field. It's clumsy to have to store timestamps as strings and then perform a conversion each time the field is used.
10 votes
We are currently exploring the feasibility of multiple timestamp columns.
If a Hive or Presto query runs for an unexpectedly long time, we would like to kill it automatically.
It would be nice if we could set a timeout as a query option.
For example, with a 30-minute timeout, a Hive/Presto query would be killed automatically if it runs for more than 30 minutes.
8 votes
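This can be approximated client-side today with a polling watchdog. A sketch in which `check_status` and `kill_job` are hypothetical stand-ins for whatever client API you use to poll and cancel a job:

```python
import time

def wait_or_kill(job_id, check_status, kill_job, timeout_sec=1800, poll_sec=30):
    """Poll a job; kill it if it is still running after timeout_sec.

    check_status(job_id) and kill_job(job_id) are hypothetical callables
    supplied by the caller; they are not part of any TD API here.
    """
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        status = check_status(job_id)
        if status in ("success", "error", "killed"):
            return status
        time.sleep(poll_sec)
    kill_job(job_id)
    return "killed"
```

With `timeout_sec=1800` this mirrors the requested 30-minute cutoff, at the cost of running the watchdog yourself.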
It would be nice to support procedural UDFs such as plpythonu, not only SQL-based UDFs.
I don't mind if they wouldn't support file I/O or network access.
7 votes
Thanks for the suggestion! This is something we are actively discussing internally.
Treasure Data's result output doesn't support parameterization of the path, file name, or table name.
Many data stores need this, because otherwise the output always replaces the same file.
Result Output to BigQuery also needs this type of parameterization, because BigQuery requires tables partitioned by date to reduce query scanning.
We want Result Output to BigQuery to accept a parameterized table name such as tablename_%Y%m%d, e.g. log_20160618.
6 votes
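Expanding such a pattern is a one-liner with `strftime`; a short sketch of the intended behaviour:

```python
from datetime import datetime

def partitioned_table(base, day):
    """Expand a tablename_%Y%m%d pattern, e.g. 'log' -> 'log_20160618'."""
    return day.strftime(base + "_%Y%m%d")

name = partitioned_table("log", datetime(2016, 6, 18))
# name == 'log_20160618'
```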
It would be great if we could have an additional key that describes what a job is for, in order to identify jobs.
As of now, it is difficult to pick out the right job by any key, which is a problem when monitoring.
6 votes
We need to manage saved query versions. A saved query management function would include comparing a modified query with the previous version.
Currently we can't see any changes to a saved query and don't know which version of a query was run in a job. We'd like to know who changed a query and what was modified.
6 votes
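Until versioning is built in, saving query texts externally and diffing them covers the comparison part of this request. A sketch with Python's standard `difflib`:

```python
import difflib

def query_diff(old_sql, new_sql):
    """Unified diff between two saved-query versions."""
    return "\n".join(difflib.unified_diff(
        old_sql.splitlines(), new_sql.splitlines(),
        fromfile="previous", tofile="current", lineterm=""))

print(query_diff("SELECT 1 FROM t", "SELECT 2 FROM t"))
```

Storing each saved revision in version control (e.g. git) would also answer the "who changed it" part.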
The Query Result Preview showing only 100 rows is very limiting.
A paging function like the BigQuery console's would be useful, because most users frequently check the first and last rows of a query result.
6 votes
We are redesigning our console from the ground up. We will keep this suggestion in mind while doing so.
Hide the default "sample_datasets" database.
We don't need it.
6 votes
Thanks for the request. We will allow account admins to delete the sample_datasets database as a normal database soon.
Termination protection for a database would be useful, like AWS's Termination Protection for an EC2 instance.
We have sometimes deleted our own database by mistake.
If the database is important for our business, that can be a very big problem.
5 votes
The Toolbelt (td CLI) for Linux is bundled as part of the td-agent Linux distribution package (RPM and DEB packages).
However, the Toolbelt is updated more often than td-agent, and sometimes we end up using an older version of the Toolbelt, with obvious drawbacks and problems. A workaround exists to update the Toolbelt version within a td-agent distribution package, but it's a manual process.
It would be great to have an independent Linux distribution package for the Toolbelt (RPM and DEB).
5 votes
Time data in TD is output in string format unless a complex process is followed to change the output into a time format.
External software like Tableau requires output in a time format in order to process the relevant column as a time.
5 votes
For historical reasons (Hive didn't support a timestamp type), TD doesn't have a timestamp type, but now all query engines do. We plan to support a timestamp type not only for the time column but in a more generic way, including data connectors and result outputs. We'll update you once we have a clear timeline.
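In the meantime, string timestamps can be parsed client-side before handing data to tools like Tableau. A sketch assuming a 'YYYY-MM-DD hh:mm:ss' output format; the format string is an assumption, so adjust it to match your actual export:

```python
from datetime import datetime

def parse_td_time(s):
    """Parse a 'YYYY-MM-DD hh:mm:ss' string into a datetime object.

    The input format is an assumption; change the pattern if your export
    produces a different layout.
    """
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

ts = parse_td_time("2016-06-18 12:34:56")
```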