Treasure Data

Welcome to our feedback forum. Let us know how we can improve your experience with our product.


Enter your idea and we'll search to see if someone has already suggested it.

If a similar idea already exists, you can support and comment on it.

If it doesn't exist, you can post your idea so others can support it.


  1. Support Presto's "EXPLAIN ANALYZE" command

    Hello guys.
    It would be great to be able to use Presto's 'EXPLAIN ANALYZE' command, to better understand where a query can be improved.
    Here is the command:
    https://prestodb.io/docs/current/sql/explain-analyze.html

    1 vote

    • Show how to get an API key

      How do I get an API key?

      1 vote

      • Use hour-of-day (0-23) instead of hour-of-am-pm (1-12) in TD_SCHEDULED_TIME form.

        hour-of-am-pm (1-12) is confusing because its interpretation depends on locale, programming conventions, and so on.
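To illustrate the ambiguity, here is a Python sketch (not TD's implementation): in strftime terms, `%H` is hour-of-day (00-23) while `%I` is hour-of-am-pm (01-12), so two different times of day can render identically under `%I`.

```python
from datetime import datetime

# 09:00 and 21:00 are distinct hours of the day...
morning = datetime(2016, 6, 18, 9, 0)
evening = datetime(2016, 6, 18, 21, 0)

# ...but both format as "09" with hour-of-am-pm (%I),
# while hour-of-day (%H) keeps them distinct.
print(morning.strftime("%I"), evening.strftime("%I"))  # 09 09
print(morning.strftime("%H"), evening.strftime("%H"))  # 09 21
```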

        2 votes
          0 comments  ·  Console
            • Result output to BigQuery: support parameterization for table name

              Treasure Data's result output doesn't support parameterization of the path, file name, or table name. Many data stores need this, because otherwise the output always replaces the existing file. Result Output to BigQuery needs this kind of parameterization in particular, because BigQuery requires date-partitioned tables to reduce query scanning.

              We would like Result Output to BigQuery to accept a parameterized table name such as tablename_%Y%m%d, e.g. log_20160618.
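The %Y%m%d placeholders above follow strftime conventions, so the requested expansion can be sketched in a few lines of Python (the `log_` prefix is just the example from the request, not a real TD setting):

```python
from datetime import date

def expand_table_name(template: str, d: date) -> str:
    """Expand strftime-style placeholders (%Y, %m, %d, ...) in a table-name template."""
    return d.strftime(template)

# A daily-partitioned table name, as in the example above.
print(expand_table_name("log_%Y%m%d", date(2016, 6, 18)))  # log_20160618
```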

            6 votes
              1 comment  ·  Outputs
            • Support for an additional key like "Description" in job list hash

              It would be great if we could have an additional key that describes what each job is for, in order to identify jobs.

              As it stands, it is difficult to pick out the right job by any of the existing keys, which is a problem when monitoring.

              6 votes
                0 comments  ·  API
              • Support Result Export for FTPS

                Support Result Export for FTPS

                2 votes
                  0 comments  ·  Outputs
                • Improve Data Utilization breakdown

                  It would be very useful if the Data Utilization page could break the imports down by day and week (now it's only by month) or by type (i.e. streaming vs bulk).

                  1 vote
                    0 comments  ·  Console
                  • Allow Search by Job ID

                    It would be useful just to have a simple search bar that allows you to search for a specific Job ID or even to search for strings within queries.

                    2 votes
                      0 comments  ·  Console
                    • Link queries to generated tables

                      Save a link to the query that created a table in the table's description, so I can tell where the table came from.

                      1 vote
                        0 comments  ·  Console
                      • Export Valid JSON

                        I tried exporting JSON and just got back:
                        [field, field2, field3, field4]
                        [field, field2, field3, field4]
                        ...

                        Instead of something useful like:
                        {field: value, field2: value, field3: value}
                        {field: value, field2: value, field3: value}

                        Why would it export invalid line-by-line JSON? From the looks of it, it's basically CSV with a [ prepended and a ] appended to each line, minus the column header row. So it's impossible to tell which fields the values belong to.

                        For now I've cracked open Google Refine and converted the CSV export to actual line by line JSON…
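The "something useful" requested above is newline-delimited JSON (one object per line, keyed by column name). A minimal Python sketch of that conversion, with illustrative column names:

```python
import json

header = ["field", "field2", "field3"]
rows = [
    ["a", 1, True],
    ["b", 2, False],
]

# One valid JSON object per line, keyed by column name,
# instead of a bare [value, value, ...] array per line.
lines = [json.dumps(dict(zip(header, row))) for row in rows]
print("\n".join(lines))
# {"field": "a", "field2": 1, "field3": true}
# {"field": "b", "field2": 2, "field3": false}
```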

                        1 vote
                          3 comments  ·  Outputs
                        • Support TD_TIME_BETWEEN, or let BETWEEN use time index pushdown

                          The TD_TIME_RANGE function is an important feature of Treasure Data, and BETWEEN in an RDB is the only similar construct. But there is a difference between them:

                          - TD_TIME_RANGE covers begin <= time < end
                          - BETWEEN covers begin <= time <= end

                          Why do we need this? Because the similarity leads some of us into incorrect usage. For example:
                          TD_TIME_RANGE(time, '2016-01-01 00:00:00', '2016-01-01 23:59:59')

                          So it would be nice if TD supported a TD_TIME_BETWEEN covering begin <= time <= end, or if the BETWEEN function supported time index pushdown.
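The pitfall in the example can be sketched in Python (epoch seconds; `td_time_range` below only mimics TD_TIME_RANGE's half-open semantics, it is not the real UDF):

```python
from datetime import datetime, timezone

def epoch(s: str) -> int:
    """Parse 'YYYY-MM-DD HH:MM:SS' as UTC epoch seconds."""
    return int(datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
               .replace(tzinfo=timezone.utc).timestamp())

def td_time_range(time: int, begin: str, end: str) -> bool:
    # Half-open interval, as TD_TIME_RANGE: begin <= time < end
    return epoch(begin) <= time < epoch(end)

# Using '23:59:59' as the end silently drops the final second of the day:
last_second = epoch("2016-01-01 23:59:59")
print(td_time_range(last_second, "2016-01-01 00:00:00", "2016-01-01 23:59:59"))  # False

# The intended half-open usage covers the whole day:
print(td_time_range(last_second, "2016-01-01 00:00:00", "2016-01-02 00:00:00"))  # True
```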

                          1 vote

                          • Support TD_TIME_BETWEEN, or let BETWEEN use time index pushdown

                            The TD_TIME_RANGE function is similar to the BETWEEN function in an RDB, but there is a difference between them:

                            - TD_TIME_RANGE covers begin <= time < end
                            - BETWEEN covers begin <= time <= end

                            Some of us use it incorrectly as a result. For example:
                            TD_TIME_RANGE(time, '2016-01-01 00:00:00', '2016-01-01 23:59:59')

                            The reason is that no equivalent function exists in other databases. So it would be nice if a TD_TIME_BETWEEN UDF were provided, covering begin <= time <= end like BETWEEN, or if the BETWEEN function supported time index pushdown.

                            0 votes

                            • Report failed import records via an error callback or an error log table in the SDKs

                              Problem:
                              When we ingest data into Treasure Data via the mobile SDKs, TD ingests only valid records. If imported data has a wrong timestamp, TD rejects it without any callback or notification, so we cannot tell whether a record with a bad timestamp was imported or not.

                              Expected result:
                              Treasure Data should return an error callback to the SDK, or import the invalid records into a table (such as an automatically generated sdk_error table).

                              1 vote

                              • Support embulk filter plugins for DataConnector

                                We would like you to support the following plugins:

                                embulk-filter-column (0.4.0)
                                embulk-filter-query_string (0.1.2)
                                embulk-filter-row (0.2.0)
                                embulk-filter-split_column (0.1.0)
                                embulk-input-s3 (0.2.8)
                                embulk-output-td (0.3.2)
                                embulk-parser-apache-custom-log (0.4.0)
                                embulk-parser-none (0.2.0)
                                embulk-parser-query_string (0.3.1)
                                embulk-parser-regex (0.2.1)

                                Our log data on S3 (Apache combined log format) contains unnecessary records (such as requests for static files), and filter plugins would let us drop them at ingestion time.

                                1 vote
                                  2 comments  ·  Inputs
                                • Allow partitioning on columns other than "time"

                                  Not all of the data we might want to load into Treasure Data is time series data. When we bulk load a data set that does not have a time column, the 'time' values are all pretty much the same. It would be nice to be able to select another column, perhaps a categorical one, to do the partitioning on. It is not clear that there would be significant query or load performance improvements (there might be!), but it is a data architecture/elegance issue that really bugs me.

                                  2 votes
                                    0 comments  ·  Data Tanks
                                  • Support Result Output to Redis

                                    Hivemall is useful for building ML models, but it's hard to use for real-time prediction. In that case we use an RDB like PostgreSQL or a KVS like Redis instead. If Treasure Data supported result output to Redis, it would be very helpful for machine-learning users.

                                    3 votes
                                      0 comments  ·  Outputs
                                    • Automatic uuid generation

                                      We want a mechanism that inserts a UUID with every row that gets inserted.

                                      PostgreSQL, for example, supports a column default like:

                                      ```
                                      pageview_id uuid primary key default uuid_generate_v4()
                                      ```

                                      Use case:

                                      We pull back a bunch of rows and do some processing on them.

                                      Later, We want to go back and reference the original rows with some other Treasure Data query. We can, of course, match a bunch of columns until we're assured of uniqueness, but it would simply be a lot easier if we could reference a unique id.

                                      vice versa:

                                      We might run a query that returns a…
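Lacking a built-in default, one workaround is to attach a UUID client-side before sending each record. A Python sketch (the `pageview_id` column name is just the example from above, and the record fields are illustrative):

```python
import uuid

def with_uuid(record: dict) -> dict:
    # Attach a random (version 4) UUID so the row can be
    # referenced uniquely later, mirroring uuid_generate_v4().
    return {**record, "pageview_id": str(uuid.uuid4())}

row = with_uuid({"path": "/index.html", "status": 200})
print(row["pageview_id"])  # a random 36-character UUID string
```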

                                      1 vote
                                        0 comments  ·  Inputs
                                      • Non-constant expressions in hive array

                                        Support non-constant expressions in array indexes, in order to get the last value in an array. This is currently supported in Presto, but Hive is preferred for complex or heavy queries.

                                        1 vote

                                        • Have a pivot function like in Oracle

                                          Oracle has a pivot function which makes it easy to take values in a particular column and turn those values into columns. Here is a description of that function: http://www.techonthenet.com/oracle/pivot.php

                                          It would be amazing if we could have this functionality on TD too.
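For reference, the core of PIVOT can be sketched in a few lines of Python: the distinct values found in one column become output columns, aggregated per group (the data here is illustrative):

```python
from collections import defaultdict

rows = [
    {"product": "apples", "quarter": "Q1", "amount": 10},
    {"product": "apples", "quarter": "Q2", "amount": 15},
    {"product": "pears",  "quarter": "Q1", "amount": 7},
]

# Pivot: one output row per product, one column per quarter,
# summing `amount` (like PIVOT ... SUM(amount) FOR quarter IN (...)).
pivoted = defaultdict(dict)
for r in rows:
    cell = pivoted[r["product"]]
    cell[r["quarter"]] = cell.get(r["quarter"], 0) + r["amount"]

print(dict(pivoted))
# {'apples': {'Q1': 10, 'Q2': 15}, 'pears': {'Q1': 7}}
```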

                                          4 votes
