This project is mirrored from https://github.com/cockroachdb/cockroach.
  1. 13 May, 2020 11 commits
  2. 12 May, 2020 7 commits
    • Merge pull request #48723 from mjibson/backport19.2-47978 · 30099ec6
      Matt Jibson authored
      release-19.2: acceptance: build GSS test binary in docker image
      30099ec6
    • colexec: fix performance inefficiency in materializer · 3c8285e3
      Yahor Yuzefovich authored
      We were mistakenly passing `sqlbase.DatumAlloc` by value rather than by
      pointer, so each call allocated a fresh batch of 16 datums but used only
      1 of them - i.e. not only were the allocations not pooled, we were also
      making a bunch of useless allocations.
      
      This inefficiency becomes noticeable when a vectorized query returns
      many rows or when wrapped processors receive a lot of input rows - in
      short, whenever we need to materialize a lot of data. For example, with
      this fix TPC-H query 16 sees about a 10% improvement (it returns 18k
      rows) and TPC-DS query 6 sees a 2x improvement (it has a wrapped hash
      aggregator with a decimal column).
      
      Release note (performance improvement): A performance inefficiency has
      been fixed in the vectorized execution engine, resulting in speedups on
      all queries run through it, with the most noticeable gains on queries
      that output many rows.
      3c8285e3
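      A minimal Go sketch (not the actual colexec code) of why passing an allocator by value defeats pooling; `datumAlloc`, `batchSize`, and the function names below are illustrative stand-ins for `sqlbase.DatumAlloc` and the materializer:
      ```go
      package main

      import "fmt"

      // datumAlloc is a simplified stand-in for sqlbase.DatumAlloc: it hands out
      // datums from a pre-allocated batch of 16 so most calls avoid a new allocation.
      type datumAlloc struct {
          buf []int
      }

      const batchSize = 16

      // newDatum returns the next pooled datum, refilling the batch when empty.
      func (a *datumAlloc) newDatum() *int {
          if len(a.buf) == 0 {
              a.buf = make([]int, batchSize) // one allocation amortized over 16 datums
          }
          d := &a.buf[0]
          a.buf = a.buf[1:]
          return d
      }

      // materializeByValue mimics the bug: the alloc is copied, so the refilled
      // batch is lost on return and every row re-allocates 16 datums.
      func materializeByValue(a datumAlloc) *int { return a.newDatum() }

      // materializeByPointer mimics the fix: the same alloc is reused across rows,
      // so the 16-datum batch is actually shared.
      func materializeByPointer(a *datumAlloc) *int { return a.newDatum() }

      func main() {
          var a datumAlloc
          for i := 0; i < 4; i++ {
              materializeByValue(a) // allocates a fresh batch every iteration
          }
          for i := 0; i < 4; i++ {
              materializeByPointer(&a) // allocates one batch shared by all iterations
          }
          fmt.Println("datums left in shared batch:", len(a.buf)) // 12
      }
      ```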
    • acceptance: build GSS test binary in docker image · 4b58ffdd
      Matt Jibson authored
      Previously the curl/tar step was failing with SSL-related errors that
      were not straightforward to fix. Instead of trying to download Go, use a
      Docker image and build the test binary inside it. This should be much
      more reliable.
      
      Fixes #47954
      
      Release note: None
      4b58ffdd
    • Merge pull request #48702 from otan-cockroach/nightly_night · 0a52cc9d
      Oliver Tan authored
      lint: fix nightly lint build for release-19.2
      0a52cc9d
    • Merge pull request #48715 from ajwerner/ajwerner/fix-test-flake-on-19.2 · e4596a1b
      ajwerner authored
      release-19.2: kvserver: cope with zero-value expiration in maxClosed
      e4596a1b
    • release-19.2: kvserver: cope with zero-value expiration in maxClosed · 49d56803
      Andrew Werner authored
      It seems that the protoutil library will sometimes insert zero values
      during tests to stress the random nullability of fields. In practice
      this shouldn't happen: the lease.Expiration field should always be nil
      for epoch-based leases. For now we add code to cope with this
      rand-nullability behavior, which is disabled for later versions.
      
      Release note: None
      49d56803
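      A minimal Go sketch of the defensive check described above; the types and names are simplified stand-ins, not the actual kvserver code:
      ```go
      package main

      import "fmt"

      // hlcTimestamp is a simplified stand-in for the HLC timestamp used by leases.
      type hlcTimestamp struct {
          WallTime int64
          Logical  int32
      }

      func (t hlcTimestamp) IsEmpty() bool { return t == hlcTimestamp{} }

      // lease is a simplified stand-in for the lease proto. For epoch-based leases
      // Expiration should be nil, but test-only random nullability can leave a
      // pointer to a zero value instead.
      type lease struct {
          Epoch      int64
          Expiration *hlcTimestamp
      }

      // maxClosedExpiration mirrors the shape of the fix: treat a zero-value
      // Expiration the same as a nil one, so an epoch-based lease is not mistaken
      // for an expiration-based lease.
      func maxClosedExpiration(l lease) (hlcTimestamp, bool) {
          if l.Expiration == nil || l.Expiration.IsEmpty() {
              return hlcTimestamp{}, false // epoch-based: no expiration to report
          }
          return *l.Expiration, true
      }

      func main() {
          epochLease := lease{Epoch: 3, Expiration: &hlcTimestamp{}} // zero value injected by tests
          if _, ok := maxClosedExpiration(epochLease); !ok {
              fmt.Println("zero-value expiration treated as nil")
          }
      }
      ```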
    • lint: fix nightly lint build for release-19.2 · b2fc0547
      Oliver Tan authored
      This seems to have been broken by `408c5807`. Add the correct variables
      to make the nightly lint build work for release-19.2.
      
      Release note: None
      b2fc0547
  3. 11 May, 2020 5 commits
  4. 10 May, 2020 2 commits
    • Merge #48645 · 6b360351
      craig[bot] authored
      48645: release-19.2: sql: only include the number of non-null rows when building histograms r=rytaft a=rytaft
      
      Backport 1/3 commits from #48528.
      
      /cc @cockroachdb/release
      
      ---
      
      Histograms used by the optimizer are built by sampling at most 10,000
      rows, splitting those rows into 200 buckets, and then scaling up the
      counts of each bucket based on the total number of rows in the table.
      
      Prior to this commit, null values in the sampled column were excluded
      from the sampled values but included in the row count used to scale
      up the bucket counts. This could result in inaccurate histograms when
      there were many nulls.
      
      This commit fixes the problem by excluding nulls from the row count used
      to scale up the histogram bucket counts.
      
      Release note (performance improvement): Histograms used by the optimizer
      for query planning now have more accurate row counts per histogram bucket,
      particularly for columns that have many null values. This results in
      better plans in some cases.
      Co-authored-by: Rebecca Taft <[email protected]>
      6b360351
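      To make the scaling step concrete, here is a small Go sketch of the idea (illustrative only, not the actual sql/stats code): per-bucket counts from the sample are multiplied by a scale factor, and the fix is to compute that factor from the non-null row count rather than the total row count.
      ```go
      package main

      import "fmt"

      // scaleBucketCounts scales histogram bucket counts from a sample up to the
      // full table. Before the fix the scale factor used all rows (nulls included);
      // after the fix it uses only the non-null rows, since null values never land
      // in a bucket.
      func scaleBucketCounts(buckets []float64, sampledNonNull, tableRows, tableNulls float64) []float64 {
          scale := (tableRows - tableNulls) / sampledNonNull // fixed: exclude nulls
          out := make([]float64, len(buckets))
          for i, c := range buckets {
              out[i] = c * scale
          }
          return out
      }

      func main() {
          // 10,000 rows sampled, 4,000 of them null, leaving 6,000 non-null samples.
          // The table has 1,000,000 rows, of which 400,000 are null.
          sample := []float64{1500, 3000, 1500} // per-bucket counts over the non-null samples
          fmt.Println(scaleBucketCounts(sample, 6000, 1e6, 4e5))
          // [150000 300000 150000]: the buckets now sum to the 600,000 non-null rows
          // instead of being inflated toward the full 1,000,000.
      }
      ```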
    • Merge pull request #48625 from rytaft/backport19.2-48580 · 7437e7ad
      Rebecca Taft authored
      release-19.2: opt: fix bug in histogram filtering code for uuid and string predicates
      7437e7ad
  5. 09 May, 2020 3 commits
    • sql: only include the number of non-null rows when building histograms · b19a3dee
      Rebecca Taft authored
      Histograms used by the optimizer are built by sampling at most 10,000
      rows, splitting those rows into 200 buckets, and then scaling up the
      counts of each bucket based on the total number of rows in the table.
      
      Prior to this commit, null values in the sampled column were excluded
      from the sampled values but included in the row count used to scale
      up the bucket counts. This could result in inaccurate histograms when
      there were many nulls.
      
      This commit fixes the problem by excluding nulls from the row count used
      to scale up the histogram bucket counts.
      
      Release note (performance improvement): Histograms used by the optimizer
      for query planning now have more accurate row counts per histogram bucket,
      particularly for columns that have many null values. This results in
      better plans in some cases.
      b19a3dee
    • sql: return the proper type length for "char" type · da65268d
      Jordan Lewis authored
      Previously, the "char" type was reported as a varlen datatype. However,
      it's actually a fixed-length datatype with size 1. This commit changes
      the reporting to be accurate.
      
      Release note (sql change): correctly report type length for "char" type
      da65268d
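      As a tiny illustrative sketch (not the actual sql code) of the reporting change: variable-length types report a length of -1, while `"char"` now reports its fixed length of 1.
      ```go
      package main

      import "fmt"

      // typeLen mimics the kind of type-length reporting fixed above (purely
      // illustrative): variable-length string types report -1, while the
      // single-byte "char" type reports a fixed length of 1.
      func typeLen(typeName string) int {
          switch typeName {
          case `"char"`:
              return 1 // fixed-length, one byte
          default:
              return -1 // treated as variable-length
          }
      }

      func main() {
          fmt.Println(typeLen(`"char"`)) // 1 after the fix (previously reported as varlen)
          fmt.Println(typeLen("text"))   // -1
      }
      ```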
    • opt: fix bug in histogram filtering code for uuid and string predicates · d6665580
      Rebecca Taft authored
      Prior to this commit, an equality constraint on some types such as uuids
      and strings could result in inaccurate stats when using a histogram. This
      was happening when the literal fell within a histogram bucket, and not at
      the upper bound. Since in general it is not possible to estimate the number
      of values within an arbitrary range for these types, we always guessed that
      half of the values in the bucket were included. However, this can be wildly
      inaccurate when the bucket contains many distinct values.
      
      This commit fixes the problem by adding a special case for equality
      conditions. Instead of estimating the selectivity to be 1/2 of the bucket,
      it estimates it as 1/distinct_count of the bucket.
      
      Release note (performance improvement): Fixed a bug in the histogram filtering
      logic in the optimizer which was causing inaccurate cardinality estimates
      for queries with equality predicates on UUIDs and strings, as well as
      some other types. This bug has existed since histograms were first introduced
      into the optimizer in the 19.2.0 release. Fixing it improves the optimizer's
      cardinality estimates and results in better query plans in some cases.
      d6665580
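      A small Go sketch of the new estimate (field names loosely follow the commit text and are illustrative, not the actual opt code): for an equality predicate whose literal falls inside a bucket, the estimate changes from half the bucket to the bucket's row count divided by its distinct count.
      ```go
      package main

      import "fmt"

      // bucket is a simplified histogram bucket: numRange rows fall strictly inside
      // the bucket, spread over distinctRange distinct values.
      type bucket struct {
          numRange, distinctRange float64
      }

      // estimateEquality returns the estimated row count for `col = literal` when
      // the literal falls strictly inside the bucket (not on its upper bound).
      // Before the fix the guess was half the bucket; after the fix it is
      // numRange / distinctRange, i.e. the average rows per distinct value.
      func estimateEquality(b bucket, fixed bool) float64 {
          if !fixed {
              return b.numRange / 2
          }
          if b.distinctRange == 0 {
              return 0
          }
          return b.numRange / b.distinctRange
      }

      func main() {
          // A bucket of UUIDs: 10,000 rows spread over 9,500 distinct values.
          b := bucket{numRange: 10000, distinctRange: 9500}
          fmt.Printf("old estimate: %.0f rows\n", estimateEquality(b, false)) // 5000
          fmt.Printf("new estimate: %.1f rows\n", estimateEquality(b, true))  // ~1.1
      }
      ```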
  6. 08 May, 2020 2 commits
    • sql,importccl: prevent DROP DATABASE ... CASCADE if there are offline tables · 9ac9a134
      Andrew Werner authored
      See the issue for more commentary on the problem. In short, we never dropped
      offline tables during `DROP DATABASE ... CASCADE`, which would leave those
      tables completely orphaned. Orphaned tables with no parent database are a problem.
      
      Perhaps it would be better to stop the relevant jobs and then clean up after
      them, but that's a much more involved fix.
      
      Fixes #48589.
      
      Release note (bug fix): Prevent dropping of databases that contain tables
      which are currently offline due to `IMPORT` or `RESTORE`. Previously,
      dropping a database in this state could lead to a corrupted schema that
      prevented running backups.
      9ac9a134
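      A minimal Go sketch of the guard described above (names and types are illustrative, not the actual sql/importccl code): refuse the CASCADE drop when any table in the database is offline, rather than silently orphaning it.
      ```go
      package main

      import "fmt"

      // tableState is a simplified stand-in for a descriptor's state; OFFLINE is
      // the state a table is in while an IMPORT or RESTORE is in progress.
      type tableState int

      const (
          statePublic tableState = iota
          stateOffline
      )

      type tableDesc struct {
          name  string
          state tableState
      }

      // checkCanDropDatabase refuses the drop if any table is offline, mirroring
      // the behavior described in the commit above.
      func checkCanDropDatabase(db string, tables []tableDesc) error {
          for _, t := range tables {
              if t.state == stateOffline {
                  return fmt.Errorf(
                      "cannot drop database %q: table %q is offline (IMPORT or RESTORE in progress)",
                      db, t.name)
              }
          }
          return nil
      }

      func main() {
          tables := []tableDesc{
              {name: "orders", state: statePublic},
              {name: "items", state: stateOffline},
          }
          if err := checkCanDropDatabase("shop", tables); err != nil {
              fmt.Println(err)
          }
      }
      ```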
    • Merge pull request #48545 from RaduBerinde/backport19.2-48514 · 603c3529
      RaduBerinde authored
      release-19.2: opt: fix string with spaces in SHOW STATISTICS USING JSON
      603c3529
  7. 07 May, 2020 6 commits
  8. 06 May, 2020 4 commits