Tree: 998805a4c7

Branches: lts_2018_06_20, lts_2018_12_18, lts_2019_08_08, lts_2020_02_25, lts_2020_09_23, lts_2021_03_24, lts_2021_11_02, lts_2022_06_23, master

Tags: 20180600, 20181200, 20181200.1, 20190808, 20190808.1, 20200225, 20200225.1, 20200225.2, 20200225.3, 20200923, 20200923.1, 20200923.2, 20200923.3, 20210324.0, 20210324.1, 20210324.2, 20210324.rc1, 20211102.0, 20211102.rc2, 20220623.0, 20220623.1, 20220623.rc1
5 Commits (998805a4c79d5d7a771f7e5a8ee3cbbbcba04f94)
Author: Abseil Team | SHA1: a9a4956020 | Date: 4 years ago

Export of internal Abseil changes

-- c68f1886f5e8fd90eb0c2d2e68feaf00a7cdacda by CJ Johnson <johnsoncj@google.com>: Introduce absl::Cleanup to the OSS repo. PiperOrigin-RevId: 354583156

-- 17030cf388e10f7eb959e3e566326d1072ce392e by Abseil Team <absl-team@google.com>: Internal change only. PiperOrigin-RevId: 354574953

-- e979d7236d4f3252e79ddda6739b67a9a326bf6d by CJ Johnson <johnsoncj@google.com>: Internal change. PiperOrigin-RevId: 354545297

-- 7ea02b3783f7f49ef97d86a8f6580a19cc57df14 by Abseil Team <absl-team@google.com>: Pre-allocate memory for vectors where the size is known. PiperOrigin-RevId: 354344576

-- 9246c7cb11f1d6444f79ebe25acc69a8a9b870e0 by Matt Kulukundis <kfm@google.com>: Add support for Elbrus 2000 (e2k). Import of https://github.com/abseil/abseil-cpp/pull/889. PiperOrigin-RevId: 354344013

-- 0fc93d359cc1fb307552e917b37b7b2e7eed822f by Abseil Team <absl-team@google.com>: Integrate CordRepRing logic into cord (but do not enable it). PiperOrigin-RevId: 354312238

-- eda05622f7da71466723acb33403f783529df24b by Abseil Team <absl-team@google.com>: Protect ignore diagnostic with "__has_warning". PiperOrigin-RevId: 354112334

-- 47716c5d8fb10efa4fdd801d28bac414c6f8ec32 by Abseil Team <absl-team@google.com>: Rearrange InlinedVector copy constructor and destructor to treat a few special cases inline and then tail-call a non-inlined routine for the rest. In particular, we optimize for empty vectors in both cases. Added a couple of benchmarks that copy either an InlVec<int64> or an InlVec<InlVec<int64>>. Speed difference:

```
BM_CopyTrivial/0 0.92ns +- 0% 0.47ns +- 0% -48.91% (p=0.000 n=11+12)
BM_CopyTrivial/1 0.92ns +- 0% 1.15ns +- 0% +25.00% (p=0.000 n=10+9)
BM_CopyTrivial/8 8.57ns +- 0% 10.72ns +- 1% +25.16% (p=0.000 n=10+12)
BM_CopyNonTrivial/0 3.21ns +- 0% 0.70ns +- 0% -78.23% (p=0.000 n=12+10)
BM_CopyNonTrivial/1 5.88ns +- 1% 5.51ns +- 0% -6.28% (p=0.000 n=10+8)
BM_CopyNonTrivial/8 21.5ns +- 1% 15.2ns +- 2% -29.23% (p=0.000 n=12+12)
```

Note: the slowdowns are a few cycles, which is expected given the procedure call added in that case. We decided this is a good tradeoff given the code size reductions and the more significant speedups for empty vectors. Size difference (as measured by nm):

```
BM_CopyTrivial from 1048 bytes to 326 bytes.
BM_CopyNonTrivial from 749 bytes to 470 bytes.
```

Code size for a large binary drops by ~500KB (from 349415719 to 348906015 348906191).

All of the benchmarks that showed a significant difference:

Ones that improve with this CL:

```
BM_CopyNonTrivial/0 3.21ns +- 0% 0.70ns +- 0% -78.23% (p=0.000 n=12+10)
BM_InlinedVectorFillString/0 0.93ns +- 0% 0.24ns +- 0% -74.19% (p=0.000 n=12+10)
BM_InlinedVectorAssignments/1 10.5ns +- 0% 4.1ns +- 0% -60.64% (p=0.000 n=11+10)
BM_InlinedVectorAssignments/2 10.7ns +- 0% 4.4ns +- 0% -59.08% (p=0.000 n=11+11)
BM_CopyTrivial/0 0.92ns +- 0% 0.47ns +- 0% -48.91% (p=0.000 n=11+12)
BM_CopyNonTrivial/8 21.5ns +- 1% 15.2ns +- 2% -29.23% (p=0.000 n=12+12)
BM_StdVectorEmpty 0.47ns +- 1% 0.35ns +- 0% -24.73% (p=0.000 n=12+12)
BM_StdVectorSize 0.46ns +- 2% 0.35ns +- 0% -24.32% (p=0.000 n=12+12)
BM_SwapElements<LargeCopyableOnly>/0 3.44ns +- 0% 2.76ns +- 1% -19.83% (p=0.000 n=11+11)
BM_InlinedVectorFillRange/256 20.7ns +- 1% 17.8ns +- 0% -14.08% (p=0.000 n=12+9)
BM_CopyNonTrivial/1 5.88ns +- 1% 5.51ns +- 0% -6.28% (p=0.000 n=10+8)
BM_SwapElements<LargeCopyableMovable>/1 4.19ns +- 0% 3.95ns +- 1% -5.63% (p=0.000 n=11+12)
BM_SwapElements<LargeCopyableMovableSwappable>/1 4.18ns +- 0% 3.99ns +- 0% -4.70% (p=0.000 n=9+11)
BM_SwapElements<LargeCopyableMovable>/0 2.41ns +- 0% 2.31ns +- 0% -4.45% (p=0.000 n=12+12)
BM_InlinedVectorFillRange/64 8.25ns +- 0% 8.04ns +- 0% -2.51% (p=0.000 n=12+11)
BM_SwapElements<LargeCopyableOnly>/1 82.4ns +- 0% 81.5ns +- 0% -1.06% (p=0.000 n=12+12)
```

Ones that get worse with this CL:

```
BM_CopyTrivial/1 0.92ns +- 0% 1.15ns +- 0% +25.00% (p=0.000 n=10+9)
BM_CopyTrivial/8 8.57ns +- 0% 10.72ns +- 1% +25.16% (p=0.000 n=10+12)
BM_SwapElements<LargeCopyableMovableSwappable>/512 1.48ns +- 1% 1.66ns +- 1% +11.88% (p=0.000 n=12+12)
BM_InlinedVectorFillString/1 11.5ns +- 0% 12.8ns +- 1% +11.62% (p=0.000 n=12+11)
BM_SwapElements<LargeCopyableMovableSwappable>/64 1.48ns +- 2% 1.66ns +- 1% +11.66% (p=0.000 n=12+11)
BM_SwapElements<LargeCopyableMovableSwappable>/1k 1.48ns +- 1% 1.65ns +- 2% +11.32% (p=0.000 n=12+12)
BM_SwapElements<LargeCopyableMovable>/512 1.48ns +- 2% 1.58ns +- 4% +6.62% (p=0.000 n=11+12)
BM_SwapElements<LargeCopyableMovable>/1k 1.49ns +- 2% 1.58ns +- 3% +6.05% (p=0.000 n=12+12)
BM_SwapElements<LargeCopyableMovable>/64 1.48ns +- 2% 1.57ns +- 4% +6.04% (p=0.000 n=11+12)
BM_InlinedVectorFillRange/1 4.81ns +- 0% 5.05ns +- 0% +4.83% (p=0.000 n=11+11)
BM_InlinedVectorFillString/8 79.4ns +- 1% 83.1ns +- 1% +4.64% (p=0.000 n=10+12)
BM_StdVectorFillString/1 16.3ns +- 0% 16.6ns +- 0% +2.13% (p=0.000 n=11+8)
```

PiperOrigin-RevId: 353906786

-- 8e26518b3cec9c598e5e9573c46c3bd1b03a67ef by Abseil Team <absl-team@google.com>: Internal change. PiperOrigin-RevId: 353737330

-- f206ae0983e58c9904ed8b8f05f9caf564a446be by Matt Kulukundis <kfm@google.com>: Import of CCTZ from GitHub. PiperOrigin-RevId: 353682256

GitOrigin-RevId: c68f1886f5e8fd90eb0c2d2e68feaf00a7cdacda
Change-Id: I5790c1036c4f543c701d1039848fabf7ae881ad8
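The first change in this batch introduces absl::Cleanup, a scope-guard helper that runs a callback when it goes out of scope. As a rough illustration only (the file-handling code and names below are made up, not taken from the commit), typical usage looks like this:

```
#include <cstdio>

#include "absl/cleanup/cleanup.h"

// Hypothetical example: make sure the FILE* is closed on every return path.
void WriteGreeting(const char* path) {
  FILE* f = std::fopen(path, "w");
  if (f == nullptr) return;
  // The lambda runs when `closer` leaves scope, like a scope guard.
  absl::Cleanup closer = [f] { std::fclose(f); };
  std::fputs("hello\n", f);
}
```

If the cleanup should not run on some path, the guard can be disarmed with `std::move(closer).Cancel()`.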
Author: Abseil Team | SHA1: 4fd9a1ec50 | Date: 4 years ago

Export of internal Abseil changes

-- 03700706d80f0939e2b5b8c02a326f045b643730 by Abseil Team <absl-team@google.com>: Reduced latency and code size of some InlinedVector methods: 1. Simpler fast path for push_back/emplace_back. 2. Do not inline the slow path of push_back/emplace_back. 3. Simplify the resize implementation. Performance: a simple benchmark that does the following per iteration:

```
push_back on an InlinedVector<int64>
push_back on an InlinedVector<bool>
```

sees iteration time go from 4.3ns to 2.8ns and code size shrink from 1129 bytes to 175 bytes. PiperOrigin-RevId: 343335635

-- 16f74277a9e8bf228c164b053da8b8098f76de62 by Derek Mauro <dmauro@google.com>: Internal change. PiperOrigin-RevId: 343332753

-- 886b6d5d0244783d309e34f03c21710f411e3cb3 by Abseil Team <absl-team@google.com>: Optimize `Status::Status`: when creating a status, we currently create an empty struct first, then assign fields. This is suboptimal: https://screenshot.googleplex.com/5HqDuFBKUEqrVgy. Relevant benchmarks:

```
BM_StatusCopyError_Deep/threads:1 26.9ns ±13% 21.2ns ±16% -21.46% (p=0.000 n=15+15)
BM_StatusCopyError_Deep/threads:2 32.0ns ±30% 25.6ns ±37% -20.17% (p=0.004 n=15+14)
BM_StatusCopyError_Deep/threads:4 37.4ns ±84% 30.6ns ±58% -18.26% (p=0.029 n=15+15)
BM_StatusCopyError_Deep/threads:8 47.2ns ±33% 33.5ns ±56% -28.91% (p=0.000 n=15+14)
```

PiperOrigin-RevId: 343303312

-- 2f9d945654292e8e52cad410fa41dae794cff42c by Abseil Team <absl-team@google.com>: Set SOVERSION for the installed libraries. PiperOrigin-RevId: 343287682

-- 600bbfffe91cfbdc60b43cdad5619258298d0b0d by Abseil Team <absl-team@google.com>: Fix a typo in a comment (than -> that). PiperOrigin-RevId: 343187724

-- 310c82cd97b3f1f0d1ee93a0ee2b0aee828b2a93 by Abseil Team <absl-team@google.com>: Simplify unaligned memory access functions. The #ifdefs that produce calls to __sanitizer_unaligned_load16 etc. were needed in past versions of this code, when we were lying to the compiler about the alignment of the loads/stores by using a reinterpret_cast. However, a year ago absl switched to simply use memcpy. Sanitizers support this correctly by default; nothing extra is required. PiperOrigin-RevId: 343159883

-- bdf6fcf99180c371fda6ba8af82fd44656e372fa by Gennadiy Rozental <rogeeff@google.com>: Migrate usage flags to global variables instead of modeling them as Abseil Flags. Also introduce new semantics for the --help=substring command line argument. PiperOrigin-RevId: 343019883

GitOrigin-RevId: 03700706d80f0939e2b5b8c02a326f045b643730
Change-Id: I4ad40dfa9606f8b8bfb2d91fd09e327105311bfb
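One of the changes above replaces sanitizer-specific #ifdefs with plain memcpy for unaligned access. A minimal sketch of that pattern, with illustrative function names that are not necessarily the exact internal ones:

```
#include <cstdint>
#include <cstring>

// Copying through memcpy avoids undefined behavior from misaligned loads and
// stores; compilers lower it to a single load/store on platforms that allow
// unaligned access, and sanitizers understand it without special hooks.
inline uint16_t UnalignedLoad16(const void* p) {
  uint16_t v;
  std::memcpy(&v, p, sizeof(v));
  return v;
}

inline void UnalignedStore16(void* p, uint16_t v) {
  std::memcpy(p, &v, sizeof(v));
}
```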
Author: Abseil Team | SHA1: 0e9921b75a | Date: 4 years ago

Export of internal Abseil changes

-- 23500704dd7c2642fad49f88c07ce41ebaab12e4 by Abseil Team <absl-team@google.com>: Change fetch_add() to store(1 + load()). As there is only one concurrent writer to the Info struct, we can avoid using the more expensive fetch_add() function. PiperOrigin-RevId: 329523785

-- 79e5018dba2e117ad89b76367165a604b3f24045 by Abseil Team <absl-team@google.com>: Record rehash count in hashtablez. This will help identify which containers could benefit from a reserve call. PiperOrigin-RevId: 329510552

-- e327e54b805d67556f934fa7f7dc2d4e72fa066a by Abseil Team <absl-team@google.com>: Fix -Wsign-compare issues. These lines could theoretically have overflowed in cases of very large stack traces, etc. Very unlikely, but this was causing warnings when built with -Wsign-compare, which we intend to enable with Chromium. In none of these three cases is it trivial to switch to a range-based for loop. PiperOrigin-RevId: 329415195

-- 08aca2fc75e8b3ad1201849987b64148fe48f283 by Xiaoyi Zhang <zhangxy@google.com>: Release absl::StatusOr. PiperOrigin-RevId: 329353348

-- bf4d2a7f8b089e2adf14d32b0e39de0a981005c3 by Xiaoyi Zhang <zhangxy@google.com>: Internal change. PiperOrigin-RevId: 329337031

-- 42fa7d2fb993bbfc344954227cf1eeb801eca065 by Abseil Team <absl-team@google.com>: Internal change. PiperOrigin-RevId: 329099807

GitOrigin-RevId: 23500704dd7c2642fad49f88c07ce41ebaab12e4
Change-Id: I6713e4ca3bb0ab2ced5e487827ae036ab8ac61f1
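The fetch_add() change above relies on there being exactly one writer. A minimal sketch of the idea, assuming a hypothetical Info struct and field name (the real hashtablez sampling types are internal to Abseil):

```
#include <atomic>
#include <cstdint>

struct Info {
  std::atomic<std::int64_t> num_rehashes{0};  // assumed field name
};

// Only one thread ever writes this counter, so a relaxed load followed by a
// store replaces the more expensive atomic read-modify-write (fetch_add).
// Concurrent readers still observe a valid value at all times.
void RecordRehash(Info* info) {
  info->num_rehashes.store(
      1 + info->num_rehashes.load(std::memory_order_relaxed),
      std::memory_order_relaxed);
}
```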
Author: Abseil Team | SHA1: e7ebf98037 | Date: 5 years ago

Export of internal Abseil changes

-- 44b312da54263fc7491b5f1e115e49e0c1e2dc10 by Greg Falcon <gfalcon@google.com>: Import of CCTZ from GitHub. PiperOrigin-RevId: 315782632

-- 27618a3b195f75384ba44e9712ae0b0b7d85937e by Greg Falcon <gfalcon@google.com>: Update Abseil's internal Invoke() implementation to follow C++17 semantics. Starting in C++17, when invoking a pointer-to-member, if the object representing the class is a reference_wrapper, that wrapper is unpacked. Because we implement a number of functional APIs that closely match C++ standard proposals, it is better if we follow the standard's notion of what "invoking" means. This also makes `absl::base_internal::Invoke()` match `std::invoke()` in C++17. I intend to make this an alias in a follow-up CL. PiperOrigin-RevId: 315750659

-- 059519ea402cd55b1b716403bb680504c6ff5808 by Xiaoyi Zhang <zhangxy@google.com>: Internal change. PiperOrigin-RevId: 315597064

-- 5e2042c8520576b2508e2bfb1020a97c7db591da by Titus Winters <titus@google.com>: Update notes on the delta between absl::Span and std::span. PiperOrigin-RevId: 315518942

-- 9d3875527b93124f5de5d6a1d575c42199fbf323 by Abseil Team <absl-team@google.com>: Internal cleanup. PiperOrigin-RevId: 315497633

GitOrigin-RevId: 44b312da54263fc7491b5f1e115e49e0c1e2dc10
Change-Id: I24573f317c8388bd693c0fdab395a7dd419b33b0
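The Invoke() change above adopts the C++17 std::invoke rule that a reference_wrapper object argument is unwrapped before the pointer-to-member is applied. A small standalone illustration (Widget is a made-up type, not from the commit):

```
#include <cassert>
#include <functional>

struct Widget {
  int size() const { return 42; }
};

int main() {
  Widget w;
  auto ref = std::ref(w);  // std::reference_wrapper<Widget>
  // Both calls end up invoking w.size(); in C++17 the wrapper is unpacked
  // before the pointer-to-member is applied.
  assert(std::invoke(&Widget::size, w) == 42);
  assert(std::invoke(&Widget::size, ref) == 42);
  return 0;
}
```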
Author: Abseil Team | SHA1: 768eb2ca28 | Date: 5 years ago

Export of internal Abseil changes

-- f012012ef78234a6a4585321b67d7b7c92ebc266 by Laramie Leavitt <lar@google.com>: Slight restructuring of the absl/random/internal randen implementation. Convert round-keys.inc into a randen_round_keys.cc file. Consistently use a 128-bit pointer type for internal method parameters; this allows simpler pointer arithmetic in C++ and permits removal of some constants and casts. Remove some redundancy in comments and constexpr variables: specifically, all references to Randen algorithm parameters use RandenTraits, and duplication in RandenSlow is removed. PiperOrigin-RevId: 312190313

-- dc8b42e054046741e9ed65335bfdface997c6063 by Abseil Team <absl-team@google.com>: Internal change. PiperOrigin-RevId: 312167304

-- f13d248fafaf206492c1362c3574031aea3abaf7 by Matthew Brown <matthewbr@google.com>: Clean up StrFormat extensions a little. PiperOrigin-RevId: 312166336

-- 9d9117589667afe2332bb7ad42bc967ca7c54502 by Derek Mauro <dmauro@google.com>: Internal change. PiperOrigin-RevId: 312105213

-- 9a12b9b3aa0e59b8ee6cf9408ed0029045543a9b by Abseil Team <absl-team@google.com>: Complete the IGNORE_TYPE macro renaming. PiperOrigin-RevId: 311999699

-- 64756f20d61021d999bd0d4c15e9ad3857382f57 by Gennadiy Rozental <rogeeff@google.com>: Switch to a fixed-bytes-specific default value. This fixes Abseil Flags for big-endian platforms. PiperOrigin-RevId: 311844448

-- bdbe6b5b29791dbc3816ada1828458b3010ff1e9 by Laramie Leavitt <lar@google.com>: Change many distribution tests to use pcg_engine as a deterministic source of entropy. It is reasonable to test that the BitGen itself has good entropy, but testing the cross product of all random distributions x all architecture variations x all submitted changes produces a very large number of test runs, and accounting for that many independent runs while still using real entropy would require loosening our allowed sigma values. Our current sigma values are too restrictive and we see a lot of failures, so we have to either relax the sigma values or convert some of the statistical tests to use deterministic values. This changelist does the latter. PiperOrigin-RevId: 311840096

GitOrigin-RevId: f012012ef78234a6a4585321b67d7b7c92ebc266
Change-Id: Ic84886f38ff30d7d72c126e9b63c9a61eb729a1a
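The last change above swaps real entropy for a deterministic engine in distribution tests. pcg_engine lives in absl/random/internal and is not a public API, so the sketch below uses a fixed-seed std::mt19937 as a stand-in to show the general idea:

```
#include <cstdio>
#include <random>

int main() {
  std::mt19937 rng(12345);  // fixed seed: the sample is fully reproducible
  std::normal_distribution<double> dist(0.0, 1.0);

  double sum = 0.0;
  constexpr int kSamples = 10000;
  for (int i = 0; i < kSamples; ++i) sum += dist(rng);
  const double mean = sum / kSamples;

  // A test written this way can assert a fixed tolerance without flaking,
  // because the same toolchain always produces the same sequence; with a
  // freshly seeded generator the bound would have to be a statistical one.
  std::printf("sample mean = %f\n", mean);
  return 0;
}
```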