1. The initial `upb_MemBlock` and the `upb_Arena` share the same cache line, and both are set up together at init time.
2. On a demand-paged OS, the `upb_Arena` init will not eagerly fault in the final page of the allocation. This is not a real concern for tiny malloc-ed blocks, but when an initial block is provided it can save an entire page of overhead for a user who passes a large virtual region as the initial block, intending that only the memory actually reached by the bump pointer gets paged in (as sketched below). It also helps avoid TLB misses in that case.
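A sketch of the usage pattern the second point has in mind, assuming the public `upb_Arena_Init(mem, n, alloc)` entry point and a POSIX `mmap`; the function name here is illustrative:

```
#include <stddef.h>
#include <sys/mman.h>

#include "upb/mem/alloc.h"
#include "upb/mem/arena.h"

// Reserve a large virtual region up front and hand it to the arena as its
// initial block.  With demand paging, only the pages the bump pointer
// actually reaches are ever faulted in; eagerly touching the final page on
// init would defeat that.
upb_Arena* NewArenaWithReservedRegion(size_t reserve_bytes) {
  void* region = mmap(NULL, reserve_bytes, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (region == MAP_FAILED) return NULL;
  // Fall back to the global allocator once the initial block is exhausted.
  // The caller remains responsible for munmap()ing the region after
  // upb_Arena_Free(), since the arena does not own memory passed to Init.
  return upb_Arena_Init(region, reserve_bytes, &upb_alloc_global);
}
```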
PiperOrigin-RevId: 714091676
The API of `upb_alloc_func` implies this would already be the case, and it is useful for calling `free_sized` in the future, which can be faster for some allocators because it skips a bucket lookup.
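For illustration, here is a minimal custom allocator written against the `upb_alloc_func` signature (`alloc, ptr, oldsize, size`), assuming `upb_alloc` is the usual single-member struct holding the function pointer; the `free_sized` call is left as a comment since it needs a C23 allocator, and the names are illustrative:

```
#include <stdlib.h>

#include "upb/mem/alloc.h"

// A pass-through allocator.  Because every call carries the old size, the
// free path (size == 0) could forward to a sized deallocation function such
// as C23's free_sized(ptr, oldsize); here it just falls back to free().
static void* MyAllocFunc(upb_alloc* alloc, void* ptr, size_t oldsize,
                         size_t size) {
  (void)alloc;
  (void)oldsize;
  if (size == 0) {
    free(ptr);  // free_sized(ptr, oldsize) would also be valid here
    return NULL;
  }
  return realloc(ptr, size);
}

static upb_alloc my_alloc = {&MyAllocFunc};
```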
PiperOrigin-RevId: 713536794
Also changed `space_allocated` to a `uintptr_t`, since it's a sum of `size_t`s; unfortunately, due to the lack of `_Generic` in default MSVC, it needs a conversion when being added.
PiperOrigin-RevId: 712994963
There's already path compression, which guarantees amortized fast lookups (halving the length of the path for subsequent lookups, alas not the inverse-Ackermann bound), but there's still no need to redo that work, or to issue acquire/release atomics, the whole way along the path. This also takes advantage of the fast-path relaxed-only read when querying the root of a node that is already a root.
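For context, a minimal non-atomic sketch of the path-halving idea (upb's real code layers atomics and its own arena layout on top of this):

```
// Each lookup points every visited node at its grandparent, so the path a
// subsequent lookup has to walk is roughly halved.
typedef struct Node {
  struct Node* parent;  // a root points at itself
} Node;

static Node* FindRoot(Node* n) {
  while (n->parent != n) {
    n->parent = n->parent->parent;  // path halving
    n = n->parent;
  }
  return n;
}
```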
PiperOrigin-RevId: 712770023
* Add acquire/release where necessary for all atomic ops
* Add a sentinel member to ensure safe publication when tsan is active; tsan will not catch the previous errors without this member (see the sketch after this list).
* For all operations using relaxed memory order, comment why relaxed order is safe
* Add a test that exercises racy fuses and space allocated checks without mutexes or other memory barriers from the test harness. This test proved the existence of several races not caught by the existing tests, including one with a confident comment about why relaxed memory order was safe.
* Add a test that exercises racing allocation and destruction among fused arenas, which doesn't use locks and substitutes a custom allocator that verifies its memory blocks.
Test coverage and assert/tsan instrumentation are now sufficient to cause test failures if any call site is further relaxed.
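An illustrative sketch (not upb's actual layout) of why the sentinel member matters under tsan:

```
#include <stdatomic.h>
#include <stdint.h>

// tsan reports a race only when two threads make conflicting *plain*
// accesses without a happens-before edge.  If every field of a published
// object is accessed atomically, a missing release/acquire pair slips past
// it.  A plain member that the publisher writes and every consumer reads
// gives tsan a conflict to check, so dropping the release below becomes a
// reported race.
typedef struct {
  int tsan_sentinel;       // plain member, written once before publication
  _Atomic uint32_t state;  // the "real" data, always accessed atomically
} Obj;

static _Atomic(Obj*) g_obj;

void Publish(Obj* o) {
  o->tsan_sentinel = 1;
  atomic_store_explicit(&o->state, 42, memory_order_relaxed);
  atomic_store_explicit(&g_obj, o, memory_order_release);  // publication
}

int Consume(void) {
  Obj* o = atomic_load_explicit(&g_obj, memory_order_acquire);
  return o ? o->tsan_sentinel : 0;  // plain read, pairing with the plain write
}
```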
PiperOrigin-RevId: 712751905
We no longer need to traverse the linked list of blocks to check allocated space, which means we also no longer need atomics in the linked list, or even in its head. This is especially beneficial because the previous implementation contained a race where we could dereference uninitialized memory: since the stores to the `next` pointers did not use release semantics and `SpaceAllocated` read them with relaxed order, there was no guarantee that `size` had actually been initialized - but worse, *there was also no guarantee that `next` had been!* Simplified:
```
AddBlock:
1 ptr = malloc();
2 ptr->size = 123;
3 ptr->next = ai->blocks;
4 ai->blocks = ptr (release order);
```
```
SpaceAllocated:
5 block = ai->blocks (relaxed order)
6 block->size (acquire, but probably by accident)
7 block = block->next (relaxed order)
```
So I think a second thread calling SpaceAllocated could see the order 1, 4, 5, 6, 7, 2, 3 and read uninitialized memory - there is no data-dependency relationship or happens-before edge that this order violates, and so it would be valid for a compiler+hardware to produce.
In reality, operation 4 will produce an `stlr` on ARM (forcing an order of 1, 2, 3 before 4), and `block->next` has a data dependency on `ai->blocks`, which forces an ordering in the hardware between 5->6 and 5->7 even for regular `ldr` instructions.
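For contrast, here is a sketch (not upb's actual code - the commit instead stops traversing the list for `SpaceAllocated` altogether) of how the publication would need to be ordered for the traversal to be safe:

```
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct Block {
  size_t size;
  struct Block* next;  // written only before the block is published
} Block;

static _Atomic(Block*) g_blocks;

void AddBlock(size_t size) {
  Block* b = malloc(sizeof(Block));  // error handling elided
  b->size = size;
  Block* head = atomic_load_explicit(&g_blocks, memory_order_relaxed);
  do {
    b->next = head;  // plain store is fine: b is not yet visible to others
  } while (!atomic_compare_exchange_weak_explicit(
      &g_blocks, &head, b, memory_order_release, memory_order_relaxed));
}

size_t SpaceAllocated(void) {
  size_t total = 0;
  // The acquire load pairs with the release CAS above, so every block
  // reachable from the head we observe was fully initialized before it
  // became reachable.
  for (Block* b = atomic_load_explicit(&g_blocks, memory_order_acquire);
       b != NULL; b = b->next) {
    total += b->size;
  }
  return total;
}
```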
Delete the arena `Contains()` helper; it's private, and the only user is its own test.
PiperOrigin-RevId: 709918443
This didn't require any change to the algorithm, except to mark one extra (immutable) member as atomic. It did require changing the tests though.
PiperOrigin-RevId: 690751567
This also increases compliance by adding `default_applicable_licenses` to several `BUILD` files that previously did not have it.
PiperOrigin-RevId: 670784686
This CL is mostly a no-op, except that now google3-only code is actually stripped from OSS, instead of being preserved in `# begin:google_only` blocks.
This follows the conventions of the greater Copybara ecosystem.
PiperOrigin-RevId: 669513564
This should significantly reduce the size of large arenas. Previously, a large arena would nearly double in size if the most recent block filled up. This could end up wasting large amounts of memory. After this CL, we will waste at most the max block size, which defaults to 32k.
This more or less matches the behavior of the C++ arena.
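A sketch of the capped growth policy described here (names and the 32k default are illustrative, not upb's actual identifiers):

```
#include <stddef.h>

// Double the previous block size but clamp it to a maximum, so that a full
// arena grows by at most kMaxBlockSize per overflow instead of doubling.
static size_t NextBlockSize(size_t last_size, size_t needed) {
  const size_t kMaxBlockSize = 32 * 1024;
  size_t size = last_size * 2;
  if (size > kMaxBlockSize) size = kMaxBlockSize;
  if (size < needed) size = needed;  // always satisfy the current request
  return size;
}
```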
PiperOrigin-RevId: 647802280
This is needed to make the protobuf/bazel package minimal for other proto rules.
Keep 4 public bzl files in upb/bazel. They end up under protobuf/bazel, and they are legitimately used by other repositories.
Move upb_proto_library_internal/* under bazel/private. Those are utilities used in the rules. Moving them one level deeper keeps the protobuf/bazel package clean for other rules.
Move build_defs.bzl and amalgamation under /upb/bazel. Those are utilities used in the build.
Move lua.BUILD and python* under /python/dist. Those are used in the WORKSPACE dependency setup.
PiperOrigin-RevId: 621442236
The upb libraries can also be accessed from Kotlin Native code, which
understands only C headers, not C++. By adding these `#ifdef` directives, the
C++ headers will appear to be empty in that case.
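A sketch of the guard pattern being described (the guard macro name is illustrative):

```
#ifndef UPB_EXAMPLE_HPP_GUARD_
#define UPB_EXAMPLE_HPP_GUARD_

// When this header is pulled in through a C-only toolchain such as
// Kotlin/Native's cinterop, __cplusplus is not defined and the header is
// effectively empty.
#ifdef __cplusplus
/* ...the existing C++ declarations live here, unchanged... */
#endif  // __cplusplus

#endif  // UPB_EXAMPLE_HPP_GUARD_
```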
PiperOrigin-RevId: 599593286
To satisfy the layering check, we need to depend on :gtest for the headers, in
addition to :gtest_main, which provides the main() function.
There are a bunch of formatting changes as a side effect of this, but they
should be harmless.
PiperOrigin-RevId: 594318263
This change moves almost everything in the `upb/` directory up one level, so
that for example `upb/upb/generated_code_support.h` becomes just
`upb/generated_code_support.h`. The only exceptions I made to this were that I
left `upb/cmake` and `upb/BUILD` where they are, mostly because that avoids
conflict with other files and the current locations seem reasonable for now.
The `python/` directory is a little bit of a challenge because we had to merge
the existing directory there with `upb/python/`. I made `upb/python/BUILD` into
the BUILD file for the merged directory, and it effectively loads the contents
of the other BUILD file via `python/build_targets.bzl`, but I plan to clean
this up soon.
PiperOrigin-RevId: 568651768
Remove array.h and map.h as hdrs from :collections_internal
Remove alloc.h and arena.h as hdrs from :mem_internal (and add them to :mem)
Remove common.h and decode.h and encode.h as hdrs from :wire_internal
Lock down the visibility of :wire_internal to upb-only
Merge :mini_descriptor_encode_internal into :mini_descriptor_internal
PiperOrigin-RevId: 558235138
Clang and GCC differ on how they detect Address Sanitizer. Support both.
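A sketch of detection logic that handles both compilers (upb's real port macros may be structured differently):

```
// GCC defines __SANITIZE_ADDRESS__ when building with -fsanitize=address;
// Clang instead reports it through __has_feature(address_sanitizer).
#if defined(__SANITIZE_ADDRESS__)
#define UPB_ASAN 1
#elif defined(__has_feature)
#if __has_feature(address_sanitizer)
#define UPB_ASAN 1
#endif
#endif

#ifndef UPB_ASAN
#define UPB_ASAN 0
#endif
```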
Closes #1424
COPYBARA_INTEGRATE_REVIEW=https://github.com/protocolbuffers/upb/pull/1424 from protocolbuffers:asan-clang 491a5ee4cfd24c8eb281f894de0cf4384525c46a
PiperOrigin-RevId: 553805994
The next in a series of CLs to split upb/BUILD into subdirs.
Create mem/internal/
Delete the deprecated upb/arena.h and upb/alloc.h stub headers
PiperOrigin-RevId: 552864952