Export of internal Abseil changes

--
07240ca7822d007cdcc79f2c40bd58b2c2010348 by Abseil Team <absl-team@google.com>:

Correct the comment from "AlphaNum" to "Arg".

PiperOrigin-RevId: 416139192

--
adcba4a6b3763626e1db7b1e8c108b3114903557 by Martijn Vels <mvels@google.com>:

Fix NewExternalRep() to require non-empty data, and remove the nullptr return.
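
For illustration, a minimal sketch of the caller-visible behavior after this
change (the wrapper function and releaser body are hypothetical; see the
MakeCordFromExternal hunk below). Empty input no longer reaches
NewExternalRep() -- the releaser is invoked immediately instead:

#include "absl/strings/cord.h"
#include "absl/strings/string_view.h"

absl::Cord WrapBuffer(absl::string_view buffer) {
  // Non-empty data creates an external CordRep; empty data never creates
  // an external node -- the releaser runs right away on the empty view.
  return absl::MakeCordFromExternal(
      buffer, [](absl::string_view v) { /* release v's backing storage */ });
}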

PiperOrigin-RevId: 416135865

--
c0d14cd918fb16f15d1d84de9284b5c5ecc1f8f2 by Abseil Team <absl-team@google.com>:

Fix doc comment for absl::ascii_isprint().

The comment incorrectly said that it includes all whitespace.
It doesn't; the only whitespace character it includes is ' '.
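
A quick check of the corrected contract (a sketch, not part of the commit;
per the ascii.h hunk below, ascii_isprint() returns c >= 32 && c < 127):

#include <cassert>
#include "absl/strings/ascii.h"

void AsciiIsPrintDemo() {
  assert(absl::ascii_isprint(' '));    // ' ' (0x20) is printable
  assert(absl::ascii_isprint('a'));
  assert(!absl::ascii_isprint('\t'));  // other whitespace is not printable
  assert(!absl::ascii_isprint('\n'));
}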

PiperOrigin-RevId: 416112524

--
d83327800159c07002b6865e21232a12463e02dd by Abseil Team <absl-team@google.com>:

Internal change

PiperOrigin-RevId: 416099978

--
baf11e9ca42ca9140cdbf8075f971db8d65b1195 by Ilya Tokar <tokarip@google.com>:

Prevent the compiler from optimizing the Group_Match* benchmarks away.
Currently we benchmark a single store of a precomputed value.
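
The fix pattern, condensed (the real changes are in the
raw_hash_set_benchmark.cc hunks below): route the inputs through
::benchmark::DoNotOptimize on every iteration so the compiler cannot hoist
the match out of the loop. ExpensiveMatch here is a hypothetical stand-in
for Group::Match:

#include "benchmark/benchmark.h"

int ExpensiveMatch(int g, int h) { return g * 31 + h; }  // stand-in

void BM_Match(benchmark::State& state) {
  int g = 42, h = 1;
  for (auto _ : state) {
    ::benchmark::DoNotOptimize(g);  // compiler must assume g may change...
    ::benchmark::DoNotOptimize(h);  // ...and h too...
    // ...so the match is recomputed instead of storing a precomputed value.
    ::benchmark::DoNotOptimize(ExpensiveMatch(g, h));
  }
}
BENCHMARK(BM_Match);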

Not all affected benchmarks show performance changes:

BM_Group_Match                                          0.53ns ± 1%  0.53ns ± 0%   -0.42%  (p=0.038 n=10+10)
BM_Group_MatchEmpty                                     0.26ns ± 1%  0.26ns ± 1%     ~     (p=1.000 n=10+10)
BM_Group_MatchEmptyOrDeleted                            0.26ns ± 1%  0.26ns ± 1%     ~     (p=0.121 n=10+10)
BM_Group_CountLeadingEmptyOrDeleted                     0.26ns ± 1%  0.45ns ± 0%  +70.05%   (p=0.000 n=10+8)
BM_Group_MatchFirstEmptyOrDeleted                       0.26ns ± 0%  0.44ns ± 1%  +65.91%    (p=0.000 n=8+9)

But inspecting the generated code shows the difference, e.g. for
BM_Group_MatchFirstEmptyOrDeleted:

Before:
add  $0xffffffffffffffff,%rbx
jne  30

After:
pcmpeqd  %xmm0,%xmm0
pcmpgtb  -0x30(%rbp),%xmm0
pmovmskb %xmm0,%eax
add      $0xffffffffffffffff,%rbx
jne      40

PiperOrigin-RevId: 416083515

--
122fbff893dc4571b3e75e4b241eb4495b925610 by Abseil Team <absl-team@google.com>:

Put namespace guard in ABSL_DECLARE_FLAG to make declaring a flag in a namespace a compiler error instead of a linker error.
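
The mechanism (see the declare.h hunk below): the macro now expands an empty
namespace absl {} next to the declaration. At global scope that is a no-op,
but inside another namespace it declares a nested absl, so the macro's second
absl::Flag<type> redeclaration no longer finds ::absl::Flag and fails to
compile. A sketch of the effect (the flag name is hypothetical):

#include "absl/flags/declare.h"

ABSL_DECLARE_FLAG(int, server_port);  // OK at global scope.

namespace myapp {
// ABSL_DECLARE_FLAG(int, server_port);
// Now a compile error: the expansion's `namespace absl {}` declares
// myapp::absl, so the trailing `extern absl::Flag<int>` redeclaration
// resolves absl to myapp::absl, which has no member Flag.
}  // namespace myapp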

PiperOrigin-RevId: 416036072

--
020fd8a20f5fa319e948846e003391fcb9e03868 by Ilya Tokar <tokarip@google.com>:

Make Cord::InlineRep::set_data unconditionally zero out memory.

Currently there is a single case where, as an optimization, we don't zero
out memory. Unconditional zeroing doesn't show any changes in benchmarks,
except for an unrelated improvement:

BM_CordPartialCopyToCord/1M/1              12.6ns ± 4%   12.6ns ± 4%     ~     (p=0.857 n=16+19)
BM_CordPartialCopyToCord/1M/128            44.9ns ± 7%   45.0ns ± 3%     ~     (p=0.468 n=18+17)
BM_CordPartialCopyToCord/1M/1k             64.5ns ± 4%   61.4ns ± 4%   -4.82%  (p=0.000 n=19+17)
BM_CordPartialCopyToCord/1M/8k              139ns ± 3%    128ns ±15%   -7.76%  (p=0.009 n=17+20)
BM_CordPartialCopyToCord/1M/16k             193ns ± 6%    168ns ± 6%  -13.17%  (p=0.000 n=17+17)
BM_CordPartialCopyToCord/4M/16k             199ns ± 4%    177ns ± 4%  -11.36%  (p=0.000 n=17+18)
BM_CordPartialCopyToCord/4M/32k             275ns ± 3%    250ns ± 4%   -9.00%  (p=0.000 n=18+18)
BM_CordPartialCopyToCord/4M/64k             291ns ± 4%    266ns ± 5%   -8.53%  (p=0.000 n=18+16)
BM_CordPartialCopyToCord/4M/128k            322ns ± 5%    291ns ± 4%   -9.43%  (p=0.000 n=20+18)
BM_CordPartialCopyToCord/8M/32k             281ns ± 5%    251ns ± 4%  -10.38%  (p=0.000 n=20+16)
BM_CordPartialCopyToCord/8M/64k             293ns ± 6%    267ns ± 4%   -8.87%  (p=0.000 n=16+19)
BM_CordPartialCopyToCord/8M/128k            334ns ± 3%    305ns ± 2%   -8.56%  (p=0.000 n=17+16)

This is clearly an alignment effect, since the number of executed instructions is the same:
BM_CordPartialCopyToCord/1M/1                155 ± 0%                155 ± 0%     ~     (all samples are equal)
BM_CordPartialCopyToCord/1M/128               446 ± 0%                446 ± 0%     ~           (p=0.332 n=36+39)
BM_CordPartialCopyToCord/1M/1k                473 ± 0%                473 ± 0%     ~           (p=0.969 n=40+40)
BM_CordPartialCopyToCord/1M/8k                808 ± 0%                808 ± 0%     ~           (p=0.127 n=40+39)
BM_CordPartialCopyToCord/1M/16k               957 ± 0%                957 ± 0%     ~           (p=0.532 n=40+40)
BM_CordPartialCopyToCord/4M/16k               952 ± 0%                952 ± 0%     ~           (p=0.686 n=39+39)
BM_CordPartialCopyToCord/4M/32k             1.12k ± 0%              1.12k ± 0%     ~           (p=0.690 n=40+40)
BM_CordPartialCopyToCord/4M/64k             1.23k ± 0%              1.23k ± 0%     ~           (p=0.182 n=40+39)
BM_CordPartialCopyToCord/4M/128k            1.44k ± 0%              1.44k ± 0%     ~           (p=0.711 n=40+40)
BM_CordPartialCopyToCord/8M/32k             1.12k ± 0%              1.12k ± 0%     ~           (p=0.697 n=40+40)
BM_CordPartialCopyToCord/8M/64k             1.23k ± 0%              1.23k ± 0%   +0.00%        (p=0.049 n=40+40)
BM_CordPartialCopyToCord/8M/128k            1.44k ± 0%              1.44k ± 0%     ~           (p=0.507 n=40+40)

This makes code simpler and doesn't regress performance.

PiperOrigin-RevId: 415560574

--
37305b2690b31682088749e4d62f40d7095bdc54 by Derek Mauro <dmauro@google.com>:

Internal change

PiperOrigin-RevId: 415558737

--
86aaed569b9e743c1eb813a5f48def978a793db3 by Martijn Vels <mvels@google.com>:

Internal change

PiperOrigin-RevId: 415515201

--
6cdb8786cdcb4fa0b8a4b72fc98940877d1fdeff by Abseil Team <absl-team@google.com>:

Update SubmitMutexProfileData to accept wait_cycles instead of wait_timestamp

PiperOrigin-RevId: 415360871

--
9f979d307aa16ad09f214e04876cbe84395c0901 by Abseil Team <absl-team@google.com>:

absl::flat_hash_set compiles with -Wconversion -Wsign-compare

PiperOrigin-RevId: 415357498

--
9eceb14174708f15e61259d449b214a8a4c7f9e7 by Abseil Team <absl-team@google.com>:

Fix AddressIsReadable for the corner case of (aligned) addr == NULL.

PiperOrigin-RevId: 415307792

--
1a39ffe55898375e2d7f88c17c99db5a1b95b313 by Martijn Vels <mvels@google.com>:

Internal change

PiperOrigin-RevId: 415162872

--
64378549b110d5f5762185a5906c520fba70f0e7 by Abseil Team <absl-team@google.com>:

Fix a typo in the comments

PiperOrigin-RevId: 415088461

--
41aae8322e913b82710153c22b97c611fdb6e1fb by Abseil Team <absl-team@google.com>:

Switch from `connect` to `rt_sigprocmask` -- the latter is much less
problematic for system call sandboxes.

PiperOrigin-RevId: 415073965

--
870c5e3388b6a35611bff538626fe7a1c8c87171 by Abseil Team <absl-team@google.com>:

Add ABSL_HAVE_HWADDRESS_SANITIZER and ABSL_HAVE_LEAK_SANITIZER
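
A minimal usage sketch (the guarded bodies are placeholders): code can now
detect these sanitizers via absl/base/config.h instead of probing
compiler-specific macros directly.

#include "absl/base/config.h"

#ifdef ABSL_HAVE_HWADDRESS_SANITIZER
// Built with HWASAN (e.g. -fsanitize=hwaddress): HWASAN-specific tuning here.
#endif

#ifdef ABSL_HAVE_LEAK_SANITIZER
// Built with LeakSanitizer: leak-check-aware logic here.
#endif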

PiperOrigin-RevId: 414871189

--
f213ed60a66b58da7ac40555adfb1d529ff0a4db by Derek Mauro <dmauro@google.com>:

Remove reference to __SANITIZE_MEMORY__, which does not exist

It appears to have been copied by pattern matching from the ASAN/TSAN
code blocks.

f47662204d/gcc/cppbuiltin.c (L79-L126)

PiperOrigin-RevId: 414806587

--
b152891e73ab515f397ceb53f66c8ee2f33863ea by Abseil Team <absl-team@google.com>:

Rollback previous commit: SYS_open is not defined in certain environments.

PiperOrigin-RevId: 414521820

--
5a1cbb282331023902e1374dd0d920c4effbe47f by Abseil Team <absl-team@google.com>:

Use syscall(SYS_open, ...) instead of open() to avoid possible symbol
interposition.

Also add some warning notes.

PiperOrigin-RevId: 414508186

--
1824d6593612710aafdc599a89b0adced7d787f6 by Abseil Team <absl-team@google.com>:

Correct aarch64 macro check

The macro is __aarch64__, not __arch64__.

PiperOrigin-RevId: 414446225

--
a1536a57b64dfd53945d33a01cfc08b18c99c97b by Abseil Team <absl-team@google.com>:

Fix backwards comment in the last commit.

PiperOrigin-RevId: 414281214

--
11ac021ba779513667a31cf2563ddafc57d6d913 by Abseil Team <absl-team@google.com>:

AddressIsReadable() didn't work correctly on ARM when the given pointer was
misaligned at the end of the page.

Fix that by aligning the pointer on an 8-byte boundary before checking it.

PiperOrigin-RevId: 414203863
GitOrigin-RevId: 07240ca7822d007cdcc79f2c40bd58b2c2010348
Change-Id: If5f129194d59f5c9e5d84efd8cd9e17a70e072ab
 absl/base/config.h                                 |  24
 absl/base/internal/direct_mmap.h                   |   2
 absl/container/internal/raw_hash_set.h             |  27
 absl/container/internal/raw_hash_set_benchmark.cc  |  20
 absl/debugging/internal/address_is_readable.cc     | 102
 absl/flags/declare.h                               |   6
 absl/random/internal/randen_detect.cc              |   1
 absl/strings/ascii.h                               |   2
 absl/strings/cord.cc                               |  19
 absl/strings/cord.h                                |  34
 absl/strings/cord_test.cc                          |  77
 absl/strings/substitute.h                          |   4
 absl/synchronization/mutex.cc                      |   2
 absl/synchronization/notification.h                |   2
 14 files changed

@@ -751,8 +751,6 @@ static_assert(ABSL_INTERNAL_INLINE_NAMESPACE_STR[0] != 'h' ||
// a compiler instrumentation module and a run-time library.
#ifdef ABSL_HAVE_MEMORY_SANITIZER
#error "ABSL_HAVE_MEMORY_SANITIZER cannot be directly set."
#elif defined(__SANITIZE_MEMORY__)
#define ABSL_HAVE_MEMORY_SANITIZER 1
#elif !defined(__native_client__) && ABSL_HAVE_FEATURE(memory_sanitizer)
#define ABSL_HAVE_MEMORY_SANITIZER 1
#endif
@@ -779,6 +777,28 @@ static_assert(ABSL_INTERNAL_INLINE_NAMESPACE_STR[0] != 'h' ||
#define ABSL_HAVE_ADDRESS_SANITIZER 1
#endif
// ABSL_HAVE_HWADDRESS_SANITIZER
//
// Hardware-Assisted AddressSanitizer (or HWASAN) is even faster than asan
// memory error detector which can use CPU features like ARM TBI, Intel LAM or
// AMD UAI.
#ifdef ABSL_HAVE_HWADDRESS_SANITIZER
#error "ABSL_HAVE_HWADDRESS_SANITIZER cannot be directly set."
#elif defined(__SANITIZE_HWADDRESS__)
#define ABSL_HAVE_HWADDRESS_SANITIZER 1
#elif ABSL_HAVE_FEATURE(hwaddress_sanitizer)
#define ABSL_HAVE_HWADDRESS_SANITIZER 1
#endif
// ABSL_HAVE_LEAK_SANITIZER
//
// LeakSanitizer (or lsan) is a detector of memory leaks.
#ifdef ABSL_HAVE_LEAK_SANITIZER
#error "ABSL_HAVE_LEAK_SANITIZER cannot be directly set."
#elif ABSL_HAVE_FEATURE(leak_sanitizer)
#define ABSL_HAVE_LEAK_SANITIZER 1
#endif
// ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION
//
// Class template argument deduction is a language feature added in C++17.

@@ -80,7 +80,7 @@ inline void* DirectMmap(void* start, size_t length, int prot, int flags, int fd,
(defined(__PPC__) && !defined(__PPC64__)) || \
(defined(__riscv) && __riscv_xlen == 32) || \
(defined(__s390__) && !defined(__s390x__)) || \
(defined(__sparc__) && !defined(__arch64__))
(defined(__sparc__) && !defined(__aarch64__))
// On these architectures, implement mmap with mmap2.
static int pagesize = 0;
if (pagesize == 0) {

@@ -201,7 +201,7 @@ constexpr bool IsNoThrowSwappable(std::false_type /* is_swappable */) {
template <typename T>
uint32_t TrailingZeros(T x) {
ABSL_INTERNAL_ASSUME(x != 0);
return countr_zero(x);
return static_cast<uint32_t>(countr_zero(x));
}
// An abstraction over a bitmask. It provides an easy way to iterate through the
@@ -230,7 +230,7 @@ class BitMask {
return *this;
}
explicit operator bool() const { return mask_ != 0; }
int operator*() const { return LowestBitSet(); }
uint32_t operator*() const { return LowestBitSet(); }
uint32_t LowestBitSet() const {
return container_internal::TrailingZeros(mask_) >> Shift;
}
@@ -248,7 +248,7 @@ class BitMask {
uint32_t LeadingZeros() const {
constexpr int total_significant_bits = SignificantBits << Shift;
constexpr int extra_bits = sizeof(T) * 8 - total_significant_bits;
return countl_zero(mask_ << extra_bits) >> Shift;
return static_cast<uint32_t>(countl_zero(mask_ << extra_bits)) >> Shift;
}
private:
@@ -360,7 +360,7 @@ struct GroupSse2Impl {
BitMask<uint32_t, kWidth> Match(h2_t hash) const {
auto match = _mm_set1_epi8(hash);
return BitMask<uint32_t, kWidth>(
_mm_movemask_epi8(_mm_cmpeq_epi8(match, ctrl)));
static_cast<uint32_t>(_mm_movemask_epi8(_mm_cmpeq_epi8(match, ctrl))));
}
// Returns a bitmask representing the positions of empty slots.
@@ -368,7 +368,7 @@ struct GroupSse2Impl {
#if ABSL_INTERNAL_RAW_HASH_SET_HAVE_SSSE3
// This only works because ctrl_t::kEmpty is -128.
return BitMask<uint32_t, kWidth>(
_mm_movemask_epi8(_mm_sign_epi8(ctrl, ctrl)));
static_cast<uint32_t>(_mm_movemask_epi8(_mm_sign_epi8(ctrl, ctrl))));
#else
return Match(static_cast<h2_t>(ctrl_t::kEmpty));
#endif
@@ -376,14 +376,15 @@ struct GroupSse2Impl {
// Returns a bitmask representing the positions of empty or deleted slots.
BitMask<uint32_t, kWidth> MatchEmptyOrDeleted() const {
auto special = _mm_set1_epi8(static_cast<int8_t>(ctrl_t::kSentinel));
auto special = _mm_set1_epi8(static_cast<uint8_t>(ctrl_t::kSentinel));
return BitMask<uint32_t, kWidth>(
_mm_movemask_epi8(_mm_cmpgt_epi8_fixed(special, ctrl)));
static_cast<uint32_t>(
_mm_movemask_epi8(_mm_cmpgt_epi8_fixed(special, ctrl))));
}
// Returns the number of trailing empty or deleted elements in the group.
uint32_t CountLeadingEmptyOrDeleted() const {
auto special = _mm_set1_epi8(static_cast<int8_t>(ctrl_t::kSentinel));
auto special = _mm_set1_epi8(static_cast<uint8_t>(ctrl_t::kSentinel));
return TrailingZeros(static_cast<uint32_t>(
_mm_movemask_epi8(_mm_cmpgt_epi8_fixed(special, ctrl)) + 1));
}
@@ -1465,7 +1466,7 @@ class raw_hash_set {
auto seq = probe(ctrl_, hash, capacity_);
while (true) {
Group g{ctrl_ + seq.offset()};
for (int i : g.Match(H2(hash))) {
for (uint32_t i : g.Match(H2(hash))) {
if (ABSL_PREDICT_TRUE(PolicyTraits::apply(
EqualElement<K>{key, eq_ref()},
PolicyTraits::element(slots_ + seq.offset(i)))))
@@ -1610,7 +1611,7 @@ class raw_hash_set {
void erase_meta_only(const_iterator it) {
assert(IsFull(*it.inner_.ctrl_) && "erasing a dangling iterator");
--size_;
const size_t index = it.inner_.ctrl_ - ctrl_;
const size_t index = static_cast<size_t>(it.inner_.ctrl_ - ctrl_);
const size_t index_before = (index - Group::kWidth) & capacity_;
const auto empty_after = Group(it.inner_.ctrl_).MatchEmpty();
const auto empty_before = Group(ctrl_ + index_before).MatchEmpty();
@@ -1832,7 +1833,7 @@ class raw_hash_set {
auto seq = probe(ctrl_, hash, capacity_);
while (true) {
Group g{ctrl_ + seq.offset()};
for (int i : g.Match(H2(hash))) {
for (uint32_t i : g.Match(H2(hash))) {
if (ABSL_PREDICT_TRUE(PolicyTraits::element(slots_ + seq.offset(i)) ==
elem))
return true;
@@ -1864,7 +1865,7 @@ class raw_hash_set {
auto seq = probe(ctrl_, hash, capacity_);
while (true) {
Group g{ctrl_ + seq.offset()};
for (int i : g.Match(H2(hash))) {
for (uint32_t i : g.Match(H2(hash))) {
if (ABSL_PREDICT_TRUE(PolicyTraits::apply(
EqualElement<K>{key, eq_ref()},
PolicyTraits::element(slots_ + seq.offset(i)))))
@@ -1984,7 +1985,7 @@ struct HashtableDebugAccess<Set, absl::void_t<typename Set::raw_hash_set>> {
auto seq = probe(set.ctrl_, hash, set.capacity_);
while (true) {
container_internal::Group g{set.ctrl_ + seq.offset()};
for (int i : g.Match(container_internal::H2(hash))) {
for (uint32_t i : g.Match(container_internal::H2(hash))) {
if (Traits::apply(
typename Set::template EqualElement<typename Set::key_type>{
key, set.eq_ref()},

@@ -330,6 +330,7 @@ void BM_Group_Match(benchmark::State& state) {
h2_t h = 1;
for (auto _ : state) {
::benchmark::DoNotOptimize(h);
::benchmark::DoNotOptimize(g);
::benchmark::DoNotOptimize(g.Match(h));
}
}
@@ -339,7 +340,10 @@ void BM_Group_MatchEmpty(benchmark::State& state) {
std::array<ctrl_t, Group::kWidth> group;
Iota(group.begin(), group.end(), -4);
Group g{group.data()};
for (auto _ : state) ::benchmark::DoNotOptimize(g.MatchEmpty());
for (auto _ : state) {
::benchmark::DoNotOptimize(g);
::benchmark::DoNotOptimize(g.MatchEmpty());
}
}
BENCHMARK(BM_Group_MatchEmpty);
@@ -347,7 +351,10 @@ void BM_Group_MatchEmptyOrDeleted(benchmark::State& state) {
std::array<ctrl_t, Group::kWidth> group;
Iota(group.begin(), group.end(), -4);
Group g{group.data()};
for (auto _ : state) ::benchmark::DoNotOptimize(g.MatchEmptyOrDeleted());
for (auto _ : state) {
::benchmark::DoNotOptimize(g);
::benchmark::DoNotOptimize(g.MatchEmptyOrDeleted());
}
}
BENCHMARK(BM_Group_MatchEmptyOrDeleted);
@@ -355,8 +362,10 @@ void BM_Group_CountLeadingEmptyOrDeleted(benchmark::State& state) {
std::array<ctrl_t, Group::kWidth> group;
Iota(group.begin(), group.end(), -2);
Group g{group.data()};
for (auto _ : state)
for (auto _ : state) {
::benchmark::DoNotOptimize(g);
::benchmark::DoNotOptimize(g.CountLeadingEmptyOrDeleted());
}
}
BENCHMARK(BM_Group_CountLeadingEmptyOrDeleted);
@@ -364,7 +373,10 @@ void BM_Group_MatchFirstEmptyOrDeleted(benchmark::State& state) {
std::array<ctrl_t, Group::kWidth> group;
Iota(group.begin(), group.end(), -2);
Group g{group.data()};
for (auto _ : state) ::benchmark::DoNotOptimize(*g.MatchEmptyOrDeleted());
for (auto _ : state) {
::benchmark::DoNotOptimize(g);
::benchmark::DoNotOptimize(*g.MatchEmptyOrDeleted());
}
}
BENCHMARK(BM_Group_MatchFirstEmptyOrDeleted);

@@ -30,16 +30,12 @@ bool AddressIsReadable(const void* /* addr */) { return true; }
ABSL_NAMESPACE_END
} // namespace absl
#else
#else // __linux__ && !__ANDROID__
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <stdint.h>
#include <syscall.h>
#include <unistd.h>
#include <cerrno>
#include "absl/base/internal/errno_saver.h"
#include "absl/base/internal/raw_logging.h"
@@ -47,60 +43,54 @@ namespace absl {
ABSL_NAMESPACE_BEGIN
namespace debugging_internal {
// NOTE: be extra careful about adding any interposable function calls here
// (such as open(), read(), etc.). These symbols may be interposed and will get
// invoked in contexts they don't expect.
//
// NOTE: any new system calls here may also require sandbox reconfiguration.
//
bool AddressIsReadable(const void *addr) {
int fd = 0;
// Align address on 8-byte boundary. On aarch64, checking last
// byte before inaccessible page returned unexpected EFAULT.
const uintptr_t u_addr = reinterpret_cast<uintptr_t>(addr) & ~7;
addr = reinterpret_cast<const void *>(u_addr);
// rt_sigprocmask below will succeed for this input.
if (addr == nullptr) return false;
absl::base_internal::ErrnoSaver errno_saver;
for (int j = 0; j < 2; j++) {
// Here we probe with some syscall which
// - accepts a one-byte region of user memory as input
// - tests for EFAULT before other validation
// - has no problematic side-effects
//
// connect(2) works for this. It copies the address into kernel
// memory before any validation beyond requiring an open fd.
// But a one byte address is never valid (sa_family is two bytes),
// so the call cannot succeed and change any state.
//
// This strategy depends on Linux implementation details,
// so we rely on the test to alert us if it stops working.
//
// Some discarded past approaches:
// - msync() doesn't reject PROT_NONE regions
// - write() on /dev/null doesn't return EFAULT
// - write() on a pipe requires creating it and draining the writes
//
// Use syscall(SYS_connect, ...) instead of connect() to prevent ASAN
// and other checkers from complaining about accesses to arbitrary memory.
do {
ABSL_RAW_CHECK(syscall(SYS_connect, fd, addr, 1) == -1,
"should never succeed");
} while (errno == EINTR);
if (errno == EFAULT) return false;
if (errno == EBADF) {
if (j != 0) {
// Unclear what happened.
ABSL_RAW_LOG(ERROR, "unexpected EBADF on fd %d", fd);
return false;
}
// fd 0 must have been closed. Try opening it again.
// Note: we shouldn't leak too many file descriptors here, since we expect
// to get fd==0 reopened.
fd = open("/dev/null", O_RDONLY);
if (fd == -1) {
ABSL_RAW_LOG(ERROR, "can't open /dev/null");
return false;
}
} else {
// probably EINVAL or ENOTSOCK; we got past EFAULT validation.
return true;
}
}
ABSL_RAW_CHECK(false, "unreachable");
return false;
// Here we probe with some syscall which
// - accepts an 8-byte region of user memory as input
// - tests for EFAULT before other validation
// - has no problematic side-effects
//
// rt_sigprocmask(2) works for this. It copies sizeof(kernel_sigset_t)==8
// bytes from the address into the kernel memory before any validation.
//
// The call can never succeed, since the `how` parameter is not one of
// SIG_BLOCK, SIG_UNBLOCK, SIG_SETMASK.
//
// This strategy depends on Linux implementation details,
// so we rely on the test to alert us if it stops working.
//
// Some discarded past approaches:
// - msync() doesn't reject PROT_NONE regions
// - write() on /dev/null doesn't return EFAULT
// - write() on a pipe requires creating it and draining the writes
// - connect() works but is problematic for sandboxes and needs a valid
// file descriptor
//
// This can never succeed (invalid first argument to sigprocmask).
ABSL_RAW_CHECK(syscall(SYS_rt_sigprocmask, ~0, addr, nullptr,
/*sizeof(kernel_sigset_t)*/ 8) == -1,
"unexpected success");
ABSL_RAW_CHECK(errno == EFAULT || errno == EINVAL, "unexpected errno");
return errno != EFAULT;
}
} // namespace debugging_internal
ABSL_NAMESPACE_END
} // namespace absl
#endif
#endif // __linux__ && !__ANDROID__

@@ -60,6 +60,10 @@ ABSL_NAMESPACE_END
// The ABSL_DECLARE_FLAG(type, name) macro expands to:
//
// extern absl::Flag<type> FLAGS_name;
#define ABSL_DECLARE_FLAG(type, name) extern ::absl::Flag<type> FLAGS_##name
#define ABSL_DECLARE_FLAG(type, name) \
extern absl::Flag<type> FLAGS_##name; \
namespace absl /* block flags in namespaces */ {} \
/* second redeclaration is to allow applying attributes */ \
extern absl::Flag<type> FLAGS_##name
#endif // ABSL_FLAGS_DECLARE_H_

@@ -40,7 +40,6 @@
#if defined(ABSL_INTERNAL_USE_X86_CPUID)
#if defined(_WIN32) || defined(_WIN64)
#include <intrin.h> // NOLINT(build/include_order)
#pragma intrinsic(__cpuid)
#else
// MSVC-equivalent __cpuid intrinsic function.
static void __cpuid(int cpu_info[4], int info_type) {

@@ -133,7 +133,7 @@ inline bool ascii_isdigit(unsigned char c) { return c >= '0' && c <= '9'; }
// ascii_isprint()
//
// Determines whether the given character is printable, including whitespace.
// Determines whether the given character is printable, including spaces.
inline bool ascii_isprint(unsigned char c) { return c >= 32 && c < 127; }
// ascii_isgraph()

@@ -311,11 +311,10 @@ static CordRep* CordRepFromString(std::string&& src) {
constexpr unsigned char Cord::InlineRep::kMaxInline;
inline void Cord::InlineRep::set_data(const char* data, size_t n,
bool nullify_tail) {
inline void Cord::InlineRep::set_data(const char* data, size_t n) {
static_assert(kMaxInline == 15, "set_data is hard-coded for a length of 15");
cord_internal::SmallMemmove(data_.as_chars(), data, n, nullify_tail);
cord_internal::SmallMemmove<true>(data_.as_chars(), data, n);
set_inline_size(n);
}
@@ -375,7 +374,8 @@ void Cord::InlineRep::AppendTreeToTree(CordRep* tree, MethodIdentifier method) {
}
void Cord::InlineRep::AppendTree(CordRep* tree, MethodIdentifier method) {
if (tree == nullptr) return;
assert(tree != nullptr);
assert(tree->length != 0);
assert(!tree->IsCrc());
if (data_.is_tree()) {
AppendTreeToTree(tree, method);
@@ -412,6 +412,7 @@ void Cord::InlineRep::PrependTreeToTree(CordRep* tree,
void Cord::InlineRep::PrependTree(CordRep* tree, MethodIdentifier method) {
assert(tree != nullptr);
assert(tree->length != 0);
assert(!tree->IsCrc());
if (data_.is_tree()) {
PrependTreeToTree(tree, method);
@@ -549,7 +550,7 @@ Cord::Cord(absl::string_view src, MethodIdentifier method)
: contents_(InlineData::kDefaultInit) {
const size_t n = src.size();
if (n <= InlineRep::kMaxInline) {
contents_.set_data(src.data(), n, true);
contents_.set_data(src.data(), n);
} else {
CordRep* rep = NewTree(src.data(), n, 0);
contents_.EmplaceTree(rep, method);
@@ -559,7 +560,7 @@ Cord::Cord(absl::string_view src, MethodIdentifier method)
template <typename T, Cord::EnableIfString<T>>
Cord::Cord(T&& src) : contents_(InlineData::kDefaultInit) {
if (src.size() <= InlineRep::kMaxInline) {
contents_.set_data(src.data(), src.size(), true);
contents_.set_data(src.data(), src.size());
} else {
CordRep* rep = CordRepFromString(std::forward<T>(src));
contents_.EmplaceTree(rep, CordzUpdateTracker::kConstructorString);
@@ -610,7 +611,7 @@ Cord& Cord::operator=(absl::string_view src) {
// - MaybeUntrackCord must be called before set_data() clobbers cordz_info.
// - set_data() must be called before Unref(tree) as it may reference tree.
if (tree != nullptr) CordzInfo::MaybeUntrackCord(contents_.cordz_info());
contents_.set_data(data, length, true);
contents_.set_data(data, length);
if (tree != nullptr) CordRep::Unref(tree);
return *this;
}
@@ -1014,9 +1015,7 @@ Cord Cord::Subcord(size_t pos, size_t new_size) const {
CordRep* tree = contents_.tree();
if (tree == nullptr) {
// sub_cord is newly constructed, no need to re-zero-out the tail of
// contents_ memory.
sub_cord.contents_.set_data(contents_.data() + pos, new_size, false);
sub_cord.contents_.set_data(contents_.data() + pos, new_size);
return sub_cord;
}

@@ -763,9 +763,8 @@ class Cord {
bool empty() const;
size_t size() const;
const char* data() const; // Returns nullptr if holding pointer
void set_data(const char* data, size_t n,
bool nullify_tail); // Discards pointer, if any
char* set_data(size_t n); // Write data to the result
void set_data(const char* data, size_t n); // Discards pointer, if any
char* set_data(size_t n); // Write data to the result
// Returns nullptr if holding bytes
absl::cord_internal::CordRep* tree() const;
absl::cord_internal::CordRep* as_tree() const;
@@ -857,7 +856,7 @@ class Cord {
bool is_profiled() const { return data_.is_tree() && data_.is_profiled(); }
// Returns the available inlined capacity, or 0 if is_tree() == true.
size_t inline_capacity() const {
size_t remaining_inline_capacity() const {
return data_.is_tree() ? 0 : kMaxInline - data_.inline_size();
}
@@ -968,8 +967,8 @@ namespace cord_internal {
// Fast implementation of memmove for up to 15 bytes. This implementation is
// safe for overlapping regions. If nullify_tail is true, the destination is
// padded with '\0' up to 16 bytes.
inline void SmallMemmove(char* dst, const char* src, size_t n,
bool nullify_tail = false) {
template <bool nullify_tail = false>
inline void SmallMemmove(char* dst, const char* src, size_t n) {
if (n >= 8) {
assert(n <= 16);
uint64_t buf1;
@@ -1006,22 +1005,16 @@ inline void SmallMemmove(char* dst, const char* src, size_t n,
}
// Does non-template-specific `CordRepExternal` initialization.
// Expects `data` to be non-empty.
// Requires `data` to be non-empty.
void InitializeCordRepExternal(absl::string_view data, CordRepExternal* rep);
// Creates a new `CordRep` that owns `data` and `releaser` and returns a pointer
// to it, or `nullptr` if `data` was empty.
// to it. Requires `data` to be non-empty.
template <typename Releaser>
// NOLINTNEXTLINE - suppress clang-tidy raw pointer return.
CordRep* NewExternalRep(absl::string_view data, Releaser&& releaser) {
assert(!data.empty());
using ReleaserType = absl::decay_t<Releaser>;
if (data.empty()) {
// Never create empty external nodes.
InvokeReleaser(Rank0{}, ReleaserType(std::forward<Releaser>(releaser)),
data);
return nullptr;
}
CordRepExternal* rep = new CordRepExternalImpl<ReleaserType>(
std::forward<Releaser>(releaser), 0);
InitializeCordRepExternal(data, rep);
@@ -1041,10 +1034,15 @@ inline CordRep* NewExternalRep(absl::string_view data,
template <typename Releaser>
Cord MakeCordFromExternal(absl::string_view data, Releaser&& releaser) {
Cord cord;
if (auto* rep = ::absl::cord_internal::NewExternalRep(
data, std::forward<Releaser>(releaser))) {
cord.contents_.EmplaceTree(rep,
if (ABSL_PREDICT_TRUE(!data.empty())) {
cord.contents_.EmplaceTree(::absl::cord_internal::NewExternalRep(
data, std::forward<Releaser>(releaser)),
Cord::MethodIdentifier::kMakeCordFromExternal);
} else {
using ReleaserType = absl::decay_t<Releaser>;
cord_internal::InvokeReleaser(
cord_internal::Rank0{}, ReleaserType(std::forward<Releaser>(releaser)),
data);
}
return cord;
}

@@ -1370,31 +1370,64 @@ TEST_P(CordTest, ConstructFromExternalNonTrivialReleaserDestructor) {
}
TEST_P(CordTest, ConstructFromExternalReferenceQualifierOverloads) {
struct Releaser {
void operator()(absl::string_view) & { *lvalue_invoked = true; }
void operator()(absl::string_view) && { *rvalue_invoked = true; }
enum InvokedAs { kMissing, kLValue, kRValue };
enum CopiedAs { kNone, kMove, kCopy };
struct Tracker {
CopiedAs copied_as = kNone;
InvokedAs invoked_as = kMissing;
void Record(InvokedAs rhs) {
ASSERT_EQ(invoked_as, kMissing);
invoked_as = rhs;
}
bool* lvalue_invoked;
bool* rvalue_invoked;
};
void Record(CopiedAs rhs) {
if (copied_as == kNone || rhs == kCopy) copied_as = rhs;
}
} tracker;
bool lvalue_invoked = false;
bool rvalue_invoked = false;
Releaser releaser = {&lvalue_invoked, &rvalue_invoked};
(void)MaybeHardened(absl::MakeCordFromExternal("", releaser));
EXPECT_FALSE(lvalue_invoked);
EXPECT_TRUE(rvalue_invoked);
rvalue_invoked = false;
class Releaser {
public:
explicit Releaser(Tracker* tracker) : tr_(tracker) { *tracker = Tracker(); }
Releaser(Releaser&& rhs) : tr_(rhs.tr_) { tr_->Record(kMove); }
Releaser(const Releaser& rhs) : tr_(rhs.tr_) { tr_->Record(kCopy); }
(void)MaybeHardened(absl::MakeCordFromExternal("dummy", releaser));
EXPECT_FALSE(lvalue_invoked);
EXPECT_TRUE(rvalue_invoked);
rvalue_invoked = false;
// NOLINTNEXTLINE: suppress clang-tidy std::move on trivially copyable type.
(void)MaybeHardened(absl::MakeCordFromExternal("dummy", std::move(releaser)));
EXPECT_FALSE(lvalue_invoked);
EXPECT_TRUE(rvalue_invoked);
void operator()(absl::string_view) & { tr_->Record(kLValue); }
void operator()(absl::string_view) && { tr_->Record(kRValue); }
private:
Tracker* tr_;
};
const Releaser releaser1(&tracker);
(void)MaybeHardened(absl::MakeCordFromExternal("", releaser1));
EXPECT_EQ(tracker.copied_as, kCopy);
EXPECT_EQ(tracker.invoked_as, kRValue);
const Releaser releaser2(&tracker);
(void)MaybeHardened(absl::MakeCordFromExternal("", releaser2));
EXPECT_EQ(tracker.copied_as, kCopy);
EXPECT_EQ(tracker.invoked_as, kRValue);
Releaser releaser3(&tracker);
(void)MaybeHardened(absl::MakeCordFromExternal("", std::move(releaser3)));
EXPECT_EQ(tracker.copied_as, kMove);
EXPECT_EQ(tracker.invoked_as, kRValue);
Releaser releaser4(&tracker);
(void)MaybeHardened(absl::MakeCordFromExternal("dummy", releaser4));
EXPECT_EQ(tracker.copied_as, kCopy);
EXPECT_EQ(tracker.invoked_as, kRValue);
const Releaser releaser5(&tracker);
(void)MaybeHardened(absl::MakeCordFromExternal("dummy", releaser5));
EXPECT_EQ(tracker.copied_as, kCopy);
EXPECT_EQ(tracker.invoked_as, kRValue);
Releaser releaser6(&tracker);
(void)MaybeHardened(absl::MakeCordFromExternal("foo", std::move(releaser6)));
EXPECT_EQ(tracker.copied_as, kMove);
EXPECT_EQ(tracker.invoked_as, kRValue);
}
TEST_P(CordTest, ExternalMemoryBasicUsage) {

@@ -159,8 +159,8 @@ class Arg {
Arg(Hex hex); // NOLINT(runtime/explicit)
Arg(Dec dec); // NOLINT(runtime/explicit)
// vector<bool>::reference and const_reference require special help to
// convert to `AlphaNum` because it requires two user defined conversions.
// vector<bool>::reference and const_reference require special help to convert
// to `Arg` because it requires two user defined conversions.
template <typename T,
absl::enable_if_t<
std::is_class<T>::value &&

@@ -2327,7 +2327,7 @@ ABSL_ATTRIBUTE_NOINLINE void Mutex::UnlockSlow(SynchWaitParams *waitp) {
base_internal::CycleClock::Now() - enqueue_timestamp;
mutex_tracer("slow release", this, wait_cycles);
ABSL_TSAN_MUTEX_PRE_DIVERT(this, 0);
submit_profile_data(enqueue_timestamp);
submit_profile_data(wait_cycles);
ABSL_TSAN_MUTEX_POST_DIVERT(this, 0);
}
}

@@ -22,7 +22,7 @@
// The `Notification` object maintains a private boolean "notified" state that
// transitions to `true` at most once. The `Notification` class provides the
// following primary member functions:
// * `HasBeenNotified() `to query its state
// * `HasBeenNotified()` to query its state
// * `WaitForNotification*()` to have threads wait until the "notified" state
// is `true`.
// * `Notify()` to set the notification's "notified" state to `true` and
