Creation of LTS branch "lts_2018_12_18"

  - 44b0fafc62 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 926bfeb9ff Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 13327debeb Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 3088e76c59 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - f6ae816808 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - a06c4a1d90 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 7b46e1d31a Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 070f6e47b3 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 7990fd459e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - f95179062e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - cc8dcd307b Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - a705aa78dc Merge pull request #194 from Mizux/windows by Xiaoyi Zhang <zhangxy988@gmail.com>
  - a4c3ffff11 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 0117457865 Merge pull request #201 from ccawley2011/fix-byteswap by Matt Calabrese <38107210+mattcalabrese-google@users.noreply.github.com>
  - f86f941385 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 94c298e2a0 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 0884a6a04e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - c16d5557cd Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 45221ccc4e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 2019e17a52 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 5b70a8910b Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - a00bdd176d Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - f340f773ed Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 445998d7ac Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - e821380d69 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - f21d187b80 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 5441bbe1db Fix code snippet in comment (#174) by Loo Rong Jie <loorongjie@gmail.com>
  - 5aae0cffae Fix CMake build (#173) by Stephan Dollberg <stephan.dollberg@gmail.com>
  - 48cd2c3f35 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - e291c279e4 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - e01d95528e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 8ff1374008 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 02451914b9 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 921fd5cf02 Merge pull request #166 from rongjiecomputer/cmake-test by Gennadiy Civil <gennadiycivil@users.noreply.github.com>
  - fb462224c0 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - c075ad3216 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 0f4bc96675 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 6c7e5ffc43 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - d6df769173 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 28080f5f05 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 9c987f429b Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 5e7d459eec Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - bed5bd6e18 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - fefc83638f Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - d8cfe9f2a7 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - ad5c960b2e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 86f0fe93ad Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - f0f15c2778 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 29ff6d4860 Removed "warning treated as error" flag from MSVC (#153) by vocaviking <vocaviking@users.noreply.github.com>
  - 083d04dd4a Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - bea85b5273 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 8f96be6ca6 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 92e07e5590 Merge pull request #152 from clnperez/fix-multi-defines-p... by Derek Mauro <761129+derekmauro@users.noreply.github.com>
  - 2125e6444a Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 9acad869d2 Merge pull request #150 from OlafvdSpek/patch-2 by Jonathan Cohen <cohenjon@google.com>
  - c2e00d3419 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 9e060686d1 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 7aa411ceaf Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 2c5af55ed3 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 44aa275286 Merge pull request #143 from rongjiecomputer/kernel by Xiaoyi Zhang <zhangxy988@gmail.com>
  - 42f22a2840 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - b973bc53ef Merge pull request #139 from siepkes/smartos-support by ahedberg <ahedberg@google.com>
  - e0def7473e Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - f826f1d489 Merge pull request #138 from edbaunton/remove-deprecated-... by ahedberg <ahedberg@google.com>
  - 7b50a4a94b Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - a5030ca512 Merge pull request #144 from rongjiecomputer/winsock2 by Xiaoyi Zhang <zhangxy988@gmail.com>
  - 02687955b7 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 8f612ebb15 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 134496a31d Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - ba8d6cf077 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - be1e84b988 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 16ac2ec2e3 Merge pull request #134 from rongjiecomputer/cmake by Alex Strelnikov <strel@google.com>
  - 7efd8dc0f1 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 87a4c07856 Export of internal Abseil changes. by Abseil Team <absl-team@google.com>
  - 4491d606df Export of internal Abseil changes. by Abseil Team <absl-team@google.com>

GitOrigin-RevId: 44b0fafc62
Change-Id: I2c427b5b41b2d34101922048b00f3d9dafcb498d
Authored by Abseil Team, committed by Ashley Hedberg
parent 6c7de165d1
commit fcb104594b
Changed files (first 100 shown; number of changed lines in parentheses):

  1. .gitignore (10)
  2. CMake/AbseilConfigureCopts.cmake (145)
  3. CMake/AbseilHelpers.cmake (225)
  4. CMake/DownloadGTest.cmake (2)
  5. CMakeLists.txt (26)
  6. CONTRIBUTING.md (47)
  7. README.md (10)
  8. WORKSPACE (24)
  9. absl/BUILD.bazel (7)
  10. absl/CMakeLists.txt (1)
  11. absl/algorithm/CMakeLists.txt (72)
  12. absl/algorithm/algorithm.h (4)
  13. absl/algorithm/container.h (88)
  14. absl/algorithm/container_test.cc (15)
  15. absl/base/BUILD.bazel (70)
  16. absl/base/CMakeLists.txt (588)
  17. absl/base/attributes.h (57)
  18. absl/base/bit_cast_test.cc (4)
  19. absl/base/call_once.h (10)
  20. absl/base/call_once_test.cc (4)
  21. absl/base/casts.h (57)
  22. absl/base/config.h (57)
  23. absl/base/exception_safety_testing_test.cc (80)
  24. absl/base/inline_variable_test.cc (4)
  25. absl/base/inline_variable_test_a.cc (4)
  26. absl/base/inline_variable_test_b.cc (4)
  27. absl/base/internal/atomic_hook.h (4)
  28. absl/base/internal/bits.h (195)
  29. absl/base/internal/bits_test.cc (97)
  30. absl/base/internal/cycleclock.cc (4)
  31. absl/base/internal/cycleclock.h (4)
  32. absl/base/internal/direct_mmap.h (24)
  33. absl/base/internal/endian.h (29)
  34. absl/base/internal/endian_test.cc (24)
  35. absl/base/internal/exception_safety_testing.cc (4)
  36. absl/base/internal/exception_safety_testing.h (351)
  37. absl/base/internal/exception_testing.h (2)
  38. absl/base/internal/hide_ptr.h (4)
  39. absl/base/internal/identity.h (4)
  40. absl/base/internal/inline_variable_testing.h (4)
  41. absl/base/internal/invoke.h (4)
  42. absl/base/internal/low_level_alloc.cc (24)
  43. absl/base/internal/low_level_alloc.h (11)
  44. absl/base/internal/low_level_alloc_test.cc (4)
  45. absl/base/internal/low_level_scheduling.h (4)
  46. absl/base/internal/raw_logging.cc (22)
  47. absl/base/internal/raw_logging.h (49)
  48. absl/base/internal/scheduling_mode.h (4)
  49. absl/base/internal/spinlock.cc (51)
  50. absl/base/internal/spinlock.h (8)
  51. absl/base/internal/spinlock_benchmark.cc (52)
  52. absl/base/internal/spinlock_linux.inc (72)
  53. absl/base/internal/spinlock_wait.cc (13)
  54. absl/base/internal/spinlock_wait.h (4)
  55. absl/base/internal/sysinfo.cc (4)
  56. absl/base/internal/sysinfo.h (4)
  57. absl/base/internal/sysinfo_test.cc (4)
  58. absl/base/internal/thread_identity.cc (14)
  59. absl/base/internal/thread_identity.h (4)
  60. absl/base/internal/thread_identity_test.cc (4)
  61. absl/base/internal/throw_delegate.cc (8)
  62. absl/base/internal/throw_delegate.h (4)
  63. absl/base/internal/unaligned_access.h (123)
  64. absl/base/internal/unscaledcycleclock.cc (4)
  65. absl/base/internal/unscaledcycleclock.h (4)
  66. absl/base/invoke_test.cc (4)
  67. absl/base/log_severity.h (6)
  68. absl/base/macros.h (18)
  69. absl/base/raw_logging_test.cc (31)
  70. absl/base/spinlock_test_common.cc (7)
  71. absl/base/thread_annotations.h (13)
  72. absl/compiler_config_setting.bzl (39)
  73. absl/container/BUILD.bazel (498)
  74. absl/container/CMakeLists.txt (701)
  75. absl/container/fixed_array.h (376)
  76. absl/container/fixed_array_exception_safety_test.cc (119)
  77. absl/container/fixed_array_test.cc (213)
  78. absl/container/flat_hash_map.h (582)
  79. absl/container/flat_hash_map_test.cc (243)
  80. absl/container/flat_hash_set.h (491)
  81. absl/container/flat_hash_set_test.cc (128)
  82. absl/container/inlined_vector.h (1251)
  83. absl/container/inlined_vector_benchmark.cc (2)
  84. absl/container/inlined_vector_test.cc (78)
  85. absl/container/internal/compressed_tuple.h (177)
  86. absl/container/internal/compressed_tuple_test.cc (168)
  87. absl/container/internal/container_memory.h (407)
  88. absl/container/internal/container_memory_test.cc (190)
  89. absl/container/internal/hash_function_defaults.h (145)
  90. absl/container/internal/hash_function_defaults_test.cc (303)
  91. absl/container/internal/hash_generator_testing.cc (74)
  92. absl/container/internal/hash_generator_testing.h (152)
  93. absl/container/internal/hash_policy_testing.h (184)
  94. absl/container/internal/hash_policy_testing_test.cc (45)
  95. absl/container/internal/hash_policy_traits.h (191)
  96. absl/container/internal/hash_policy_traits_test.cc (144)
  97. absl/container/internal/hashtable_debug.h (110)
  98. absl/container/internal/hashtable_debug_hooks.h (83)
  99. absl/container/internal/layout.h (740)
  100. absl/container/internal/layout_test.cc (1557)

Some files were not shown because too many files have changed in this diff.

.gitignore (10 changes, vendored)

@@ -1,2 +1,12 @@
# Ignore all bazel-* symlinks.
/bazel-*
# Ignore Bazel verbose explanations
--verbose_explanations
# Ignore CMake usual build directory
build
# Ignore Vim files
*.swp
# Ignore QtCreator Project file
CMakeLists.txt.user
# Ignore VS Code files
.vscode/*

CMake/AbseilConfigureCopts.cmake (new file, 145 changes)

@@ -0,0 +1,145 @@
# Abseil-specific compiler flags. See absl/copts.bzl for description.
# DO NOT CHANGE THIS FILE WITHOUT THE CORRESPONDING CHANGE TO absl/copts.bzl
list(APPEND GCC_FLAGS
-Wall
-Wextra
-Wcast-qual
-Wconversion-null
-Wmissing-declarations
-Woverlength-strings
-Wpointer-arith
-Wunused-local-typedefs
-Wunused-result
-Wvarargs
-Wwrite-strings
-Wno-sign-compare
)
list(APPEND GCC_TEST_FLAGS
-Wno-conversion-null
-Wno-missing-declarations
-Wno-sign-compare
-Wno-unused-function
-Wno-unused-parameter
-Wno-unused-private-field
)
list(APPEND LLVM_FLAGS
-Wall
-Wextra
-Weverything
-Wno-c++98-compat-pedantic
-Wno-conversion
-Wno-covered-switch-default
-Wno-deprecated
-Wno-disabled-macro-expansion
-Wno-double-promotion
-Wno-comma
-Wno-extra-semi
-Wno-packed
-Wno-padded
-Wno-sign-compare
-Wno-float-conversion
-Wno-float-equal
-Wno-format-nonliteral
-Wno-gcc-compat
-Wno-global-constructors
-Wno-exit-time-destructors
-Wno-nested-anon-types
-Wno-non-modular-include-in-module
-Wno-old-style-cast
-Wno-range-loop-analysis
-Wno-reserved-id-macro
-Wno-shorten-64-to-32
-Wno-switch-enum
-Wno-thread-safety-negative
-Wno-undef
-Wno-unknown-warning-option
-Wno-unreachable-code
-Wno-unused-macros
-Wno-weak-vtables
-Wbitfield-enum-conversion
-Wbool-conversion
-Wconstant-conversion
-Wenum-conversion
-Wint-conversion
-Wliteral-conversion
-Wnon-literal-null-conversion
-Wnull-conversion
-Wobjc-literal-conversion
-Wno-sign-conversion
-Wstring-conversion
)
list(APPEND LLVM_TEST_FLAGS
-Wno-c99-extensions
-Wno-missing-noreturn
-Wno-missing-prototypes
-Wno-missing-variable-declarations
-Wno-null-conversion
-Wno-shadow
-Wno-shift-sign-overflow
-Wno-sign-compare
-Wno-unused-function
-Wno-unused-member-function
-Wno-unused-parameter
-Wno-unused-private-field
-Wno-unused-template
-Wno-used-but-marked-unused
-Wno-zero-as-null-pointer-constant
-Wno-gnu-zero-variadic-macro-arguments
)
list(APPEND MSVC_FLAGS
/W3
/wd4005
/wd4018
/wd4068
/wd4180
/wd4244
/wd4267
/wd4800
/DNOMINMAX
/DWIN32_LEAN_AND_MEAN
/D_CRT_SECURE_NO_WARNINGS
/D_SCL_SECURE_NO_WARNINGS
/D_ENABLE_EXTENDED_ALIGNED_STORAGE
)
list(APPEND MSVC_TEST_FLAGS
/wd4101
/wd4503
)
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(ABSL_DEFAULT_COPTS "${GCC_FLAGS}")
set(ABSL_TEST_COPTS "${GCC_FLAGS};${GCC_TEST_FLAGS}")
set(ABSL_EXCEPTIONS_FLAG "-fexceptions")
elseif("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
# MATCHES so we get both Clang and AppleClang
set(ABSL_DEFAULT_COPTS "${LLVM_FLAGS}")
set(ABSL_TEST_COPTS "${LLVM_FLAGS};${LLVM_TEST_FLAGS}")
set(ABSL_EXCEPTIONS_FLAG "-fexceptions")
elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
set(ABSL_DEFAULT_COPTS "${MSVC_FLAGS}")
set(ABSL_TEST_COPTS "${MSVC_FLAGS};${MSVC_TEST_FLAGS}")
set(ABSL_EXCEPTIONS_FLAG "/U_HAS_EXCEPTIONS;/D_HAS_EXCEPTIONS=1;/EHsc")
else()
message(WARNING "Unknown compiler: ${CMAKE_CXX_COMPILER}. Building with no default flags")
set(ABSL_DEFAULT_COPTS "")
set(ABSL_TEST_COPTS "")
set(ABSL_EXCEPTIONS_FLAG "")
endif()
# This flag is used internally for Bazel builds and is kept here for consistency
set(ABSL_EXCEPTIONS_FLAG_LINKOPTS "")
if("${CMAKE_CXX_STANDARD}" EQUAL 98)
message(FATAL_ERROR "Abseil requires at least C++11")
elseif(NOT "${CMAKE_CXX_STANDARD}")
message(STATUS "No CMAKE_CXX_STANDARD set, assuming 11")
set(ABSL_CXX_STANDARD 11)
else()
set(ABSL_CXX_STANDARD "${CMAKE_CXX_STANDARD}")
endif()

CMake/AbseilHelpers.cmake (225 changes)

@@ -15,6 +15,7 @@
#
include(CMakeParseArguments)
include(AbseilConfigureCopts)
# The IDE folder for Abseil that will be used if Abseil is included in a CMake
# project that sets
@@ -48,7 +49,11 @@ function(absl_library)
add_library(${_NAME} STATIC ${ABSL_LIB_SOURCES})
target_compile_options(${_NAME} PRIVATE ${ABSL_COMPILE_CXXFLAGS} ${ABSL_LIB_PRIVATE_COMPILE_FLAGS})
target_compile_options(${_NAME}
PRIVATE
${ABSL_LIB_PRIVATE_COMPILE_FLAGS}
${ABSL_DEFAULT_COPTS}
)
target_link_libraries(${_NAME} PUBLIC ${ABSL_LIB_PUBLIC_LIBRARIES})
target_include_directories(${_NAME}
PUBLIC ${ABSL_COMMON_INCLUDE_DIRS} ${ABSL_LIB_PUBLIC_INCLUDE_DIRS}
@@ -57,12 +62,199 @@ function(absl_library)
# Add all Abseil targets to a folder in the IDE for organization.
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD ${ABSL_CXX_STANDARD})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD_REQUIRED ON)
if(ABSL_LIB_EXPORT_NAME)
add_library(absl::${ABSL_LIB_EXPORT_NAME} ALIAS ${_NAME})
endif()
endfunction()
# CMake function to imitate Bazel's cc_library rule.
#
# Parameters:
# NAME: name of target (see Note)
# HDRS: List of public header files for the library
# SRCS: List of source files for the library
# DEPS: List of other libraries to be linked in to the binary targets
# COPTS: List of private compile options
# DEFINES: List of public defines
# LINKOPTS: List of link options
# PUBLIC: Add this so that this library will be exported under absl:: (see Note).
# Also, in IDEs the target will appear in the Abseil folder, while non-PUBLIC targets will appear in Abseil/internal.
# TESTONLY: When added, this target will only be built if the user passes -DABSL_RUN_TESTS=ON to CMake.
#
# Note:
# By default, absl_cc_library will always create a library named absl_internal_${NAME},
# and alias target absl::${NAME}.
# This is to reduce namespace pollution.
#
# absl_cc_library(
# NAME
# awesome
# HDRS
# "a.h"
# SRCS
# "a.cc"
# )
# absl_cc_library(
# NAME
# fantastic_lib
# SRCS
# "b.cc"
# DEPS
# absl_internal_awesome # not "awesome"!
# )
#
# If PUBLIC is set, absl_cc_library will instead create a target named
# absl_${NAME} and still an alias absl::${NAME}.
#
# absl_cc_library(
# NAME
# main_lib
# ...
# PUBLIC
# )
#
# The user can then use the library as absl::main_lib (although absl_main_lib is defined too).
#
# TODO: Implement "ALWAYSLINK"
function(absl_cc_library)
cmake_parse_arguments(ABSL_CC_LIB
"DISABLE_INSTALL;PUBLIC;TESTONLY"
"NAME"
"HDRS;SRCS;COPTS;DEFINES;LINKOPTS;DEPS"
${ARGN}
)
if (NOT ABSL_CC_LIB_TESTONLY OR ABSL_RUN_TESTS)
if (ABSL_CC_LIB_PUBLIC)
set(_NAME "absl_${ABSL_CC_LIB_NAME}")
else()
set(_NAME "absl_internal_${ABSL_CC_LIB_NAME}")
endif()
# Check if this is a header-only library
if ("${ABSL_CC_LIB_SRCS}" STREQUAL "")
set(ABSL_CC_LIB_IS_INTERFACE 1)
else()
set(ABSL_CC_LIB_IS_INTERFACE 0)
endif()
if(NOT ABSL_CC_LIB_IS_INTERFACE)
add_library(${_NAME} STATIC "")
target_sources(${_NAME} PRIVATE ${ABSL_CC_LIB_SRCS} ${ABSL_CC_LIB_HDRS})
target_include_directories(${_NAME}
PUBLIC ${ABSL_COMMON_INCLUDE_DIRS})
target_compile_options(${_NAME}
PRIVATE ${ABSL_CC_LIB_COPTS})
target_link_libraries(${_NAME}
PUBLIC ${ABSL_CC_LIB_DEPS}
PRIVATE ${ABSL_CC_LIB_LINKOPTS}
)
target_compile_definitions(${_NAME} PUBLIC ${ABSL_CC_LIB_DEFINES})
# Add all Abseil targets to a folder in the IDE for organization.
if(ABSL_CC_LIB_PUBLIC)
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER})
elseif(ABSL_CC_LIB_TESTONLY)
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER}/test)
else()
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER}/internal)
endif()
# INTERFACE libraries can't have the CXX_STANDARD property set
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD ${ABSL_CXX_STANDARD})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD_REQUIRED ON)
else()
# Generating header-only library
add_library(${_NAME} INTERFACE)
target_include_directories(${_NAME}
INTERFACE ${ABSL_COMMON_INCLUDE_DIRS})
target_link_libraries(${_NAME}
INTERFACE ${ABSL_CC_LIB_DEPS} ${ABSL_CC_LIB_LINKOPTS}
)
target_compile_definitions(${_NAME} INTERFACE ${ABSL_CC_LIB_DEFINES})
endif()
add_library(absl::${ABSL_CC_LIB_NAME} ALIAS ${_NAME})
endif()
endfunction()
# absl_cc_test()
#
# CMake function to imitate Bazel's cc_test rule.
#
# Parameters:
# NAME: name of target (see Usage below)
# SRCS: List of source files for the binary
# DEPS: List of other libraries to be linked in to the binary targets
# COPTS: List of private compile options
# DEFINES: List of public defines
# LINKOPTS: List of link options
#
# Note:
# By default, absl_cc_test will always create a binary named absl_${NAME}.
# This will also add it to the ctest list as absl_${NAME}.
#
# Usage:
# absl_cc_library(
# NAME
# awesome
# HDRS
# "a.h"
# SRCS
# "a.cc"
# PUBLIC
# )
#
# absl_cc_test(
# NAME
# awesome_test
# SRCS
# "awesome_test.cc"
# DEPS
# absl::awesome
# gmock
# gtest_main
# )
function(absl_cc_test)
if(NOT ABSL_RUN_TESTS)
return()
endif()
cmake_parse_arguments(ABSL_CC_TEST
""
"NAME"
"SRCS;COPTS;DEFINES;LINKOPTS;DEPS"
${ARGN}
)
set(_NAME "absl_${ABSL_CC_TEST_NAME}")
add_executable(${_NAME} "")
target_sources(${_NAME} PRIVATE ${ABSL_CC_TEST_SRCS})
target_include_directories(${_NAME}
PUBLIC ${ABSL_COMMON_INCLUDE_DIRS}
PRIVATE ${GMOCK_INCLUDE_DIRS} ${GTEST_INCLUDE_DIRS}
)
target_compile_definitions(${_NAME}
PUBLIC ${ABSL_CC_TEST_DEFINES}
)
target_compile_options(${_NAME}
PRIVATE ${ABSL_CC_TEST_COPTS}
)
target_link_libraries(${_NAME}
PUBLIC ${ABSL_CC_TEST_DEPS}
PRIVATE ${ABSL_CC_TEST_LINKOPTS}
)
# Add all Abseil targets to a folder in the IDE for organization.
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER}/test)
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD ${ABSL_CXX_STANDARD})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD_REQUIRED ON)
add_test(NAME ${_NAME} COMMAND ${_NAME})
endfunction()
#
# header only virtual target creation
@@ -103,13 +295,15 @@ function(absl_header_library)
# Add all Abseil targets to a folder in the IDE for organization.
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD ${ABSL_CXX_STANDARD})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD_REQUIRED ON)
if(ABSL_HO_LIB_EXPORT_NAME)
add_library(absl::${ABSL_HO_LIB_EXPORT_NAME} ALIAS ${_NAME})
endif()
endfunction()
#
# create an abseil unit_test and add it to the executed test list
#
@@ -123,7 +317,7 @@ endfunction()
#
# all tests will be register for execution with add_test()
#
# test compilation and execution is disabled when BUILD_TESTING=OFF
# test compilation and execution is disabled when ABSL_RUN_TESTS=OFF
#
function(absl_test)
@@ -135,25 +329,32 @@ function(absl_test)
)
if(BUILD_TESTING)
if(ABSL_RUN_TESTS)
set(_NAME ${ABSL_TEST_TARGET})
set(_NAME "absl_${ABSL_TEST_TARGET}")
string(TOUPPER ${_NAME} _UPPER_NAME)
add_executable(${_NAME}_bin ${ABSL_TEST_SOURCES})
add_executable(${_NAME} ${ABSL_TEST_SOURCES})
target_compile_options(${_NAME}_bin PRIVATE ${ABSL_COMPILE_CXXFLAGS} ${ABSL_TEST_PRIVATE_COMPILE_FLAGS})
target_link_libraries(${_NAME}_bin PUBLIC ${ABSL_TEST_PUBLIC_LIBRARIES} ${ABSL_TEST_COMMON_LIBRARIES})
target_include_directories(${_NAME}_bin
target_compile_options(${_NAME}
PRIVATE
${ABSL_TEST_PRIVATE_COMPILE_FLAGS}
${ABSL_TEST_COPTS}
)
target_link_libraries(${_NAME} PUBLIC ${ABSL_TEST_PUBLIC_LIBRARIES} ${ABSL_TEST_COMMON_LIBRARIES})
target_include_directories(${_NAME}
PUBLIC ${ABSL_COMMON_INCLUDE_DIRS} ${ABSL_TEST_PUBLIC_INCLUDE_DIRS}
PRIVATE ${GMOCK_INCLUDE_DIRS} ${GTEST_INCLUDE_DIRS}
)
# Add all Abseil targets to a folder in the IDE for organization.
set_property(TARGET ${_NAME}_bin PROPERTY FOLDER ${ABSL_IDE_FOLDER})
set_property(TARGET ${_NAME} PROPERTY FOLDER ${ABSL_IDE_FOLDER})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD ${ABSL_CXX_STANDARD})
set_property(TARGET ${_NAME} PROPERTY CXX_STANDARD_REQUIRED ON)
add_test(${_NAME} ${_NAME}_bin)
endif(BUILD_TESTING)
add_test(NAME ${_NAME} COMMAND ${_NAME})
endif(ABSL_RUN_TESTS)
endfunction()

CMake/DownloadGTest.cmake (2 changes)

@@ -4,7 +4,7 @@
# Download the latest googletest from Github master
configure_file(
${CMAKE_CURRENT_LIST_DIR}/CMakeLists.txt.in
googletest-download/CMakeLists.txt
${CMAKE_BINARY_DIR}/googletest-download/CMakeLists.txt
)
# Configure and build the downloaded googletest source

CMakeLists.txt (26 changes)

@@ -17,9 +17,15 @@
# We require 3.0 for modern, target-based CMake. We require 3.1 for the use of
# CXX_STANDARD in our targets.
cmake_minimum_required(VERSION 3.1)
# Compiler id for Apple Clang is now AppleClang.
if (POLICY CMP0025)
cmake_policy(SET CMP0025 NEW)
endif()
project(absl)
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/CMake)
list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_LIST_DIR}/CMake)
include(GNUInstallDirs)
include(AbseilHelpers)
@@ -32,8 +38,16 @@ if (MSVC)
# /wd4244 conversion from 'type1' to 'type2'
# /wd4267 conversion from 'size_t' to 'type2'
# /wd4800 force value to bool 'true' or 'false' (performance warning)
add_compile_options(/W3 /WX /wd4005 /wd4068 /wd4244 /wd4267 /wd4800)
add_definitions(/DNOMINMAX /DWIN32_LEAN_AND_MEAN=1 /D_CRT_SECURE_NO_WARNINGS /D_SCL_SECURE_NO_WARNINGS)
add_compile_options(/W3 /wd4005 /wd4068 /wd4244 /wd4267 /wd4800)
# /D_ENABLE_EXTENDED_ALIGNED_STORAGE Introduced in VS 2017 15.8; before this,
# the member type would non-conformingly have an alignment of only alignof(max_align_t).
add_definitions(
/DNOMINMAX
/DWIN32_LEAN_AND_MEAN=1
/D_CRT_SECURE_NO_WARNINGS
/D_SCL_SECURE_NO_WARNINGS
/D_ENABLE_EXTENDED_ALIGNED_STORAGE
)
else()
set(ABSL_STD_CXX_FLAG "-std=c++11" CACHE STRING "c++ std flag (default: c++11)")
endif()
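
For context on the /D_ENABLE_EXTENDED_ALIGNED_STORAGE define added above: a minimal C++ sketch, not part of this diff, assuming MSVC (VS 2017 15.8+) built with that define.

#include <type_traits>

// With _ENABLE_EXTENDED_ALIGNED_STORAGE defined, MSVC honors extended
// alignment requests; the older behavior non-conformingly capped the
// alignment of the member type at alignof(max_align_t).
using Storage64 = std::aligned_storage<64, 64>::type;
static_assert(alignof(Storage64) == 64, "extended alignment is honored");
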
@@ -60,6 +74,12 @@ set(CMAKE_CXX_FLAGS "${ABSL_STD_CXX_FLAG} ${CMAKE_CXX_FLAGS}")
# -fexceptions
set(ABSL_EXCEPTIONS_FLAG "${CMAKE_CXX_EXCEPTIONS}")
if("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
set(ABSL_USING_CLANG ON)
else()
set(ABSL_USING_CLANG OFF)
endif()
# find dependencies
## pthread
find_package(Threads REQUIRED)

CONTRIBUTING.md (47 changes)

@@ -18,6 +18,53 @@ You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Contribution Guidelines
Potential contributors sometimes ask us if the Abseil project is the appropriate
home for their utility library code or for specific functions implementing
missing portions of the standard. Often, the answer to this question is "no".
We’d like to articulate our thinking on this issue so that our choices can be
understood by everyone and so that contributors can have a better intuition
about whether Abseil might be interested in adopting a new library.
### Priorities
Although our mission is to augment the C++ standard library, our goal is not to
provide a full forward-compatible implementation of the latest standard. For us
to consider a library for inclusion in Abseil, it is not enough that a library
is useful. We generally choose to release a library when it meets at least one
of the following criteria:
* **Widespread usage** - Using our internal codebase to help gauge usage, most
of the libraries we've released have tens of thousands of users.
* **Anticipated widespread usage** - Pre-adoption of some standard-compliant
APIs may not have broad adoption initially but can be expected to pick up
usage when it replaces legacy APIs. `absl::from_chars`, for example,
replaces existing code that converts strings to numbers and will therefore
likely see usage growth.
* **High impact** - APIs that provide a key solution to a specific problem,
such as `absl::FixedArray`, have higher impact than usage numbers may signal
and are released because of their importance.
* **Direct support for a library that falls under one of the above** - When we
want access to a smaller library as an implementation detail for a
higher-priority library we plan to release, we may release it, as we did
with portions of `absl/meta/type_traits.h`. One consequence of this is that
the presence of a library in Abseil does not necessarily mean that other
similar libraries would be a high priority.
### API Freeze Consequences
Via the
[Abseil Compatibility Guidelines](https://abseil.io/about/compatibility), we
have promised a large degree of API stability. In particular, we will not make
backward-incompatible changes to released APIs without also shipping a tool or
process that can upgrade our users' code. We are not yet at the point of easily
releasing such tools. Therefore, at this time, shipping a library establishes an
API contract which is borderline unchangeable. (We can add new functionality,
but we cannot easily change existing behavior.) This constraint forces us to
very carefully review all APIs that we ship.
## Coding Style
To keep the source consistent, readable, diffable and easy to merge, we use a

README.md (10 changes)

@@ -63,10 +63,14 @@ Abseil contains the following C++ library components:
<br /> The `algorithm` library contains additions to the C++ `<algorithm>`
library and container-based versions of such algorithms.
* [`container`](absl/container/)
<br /> The `container` library contains additional STL-style containers.
<br /> The `container` library contains additional STL-style containers,
including Abseil's unordered "Swiss table" containers.
* [`debugging`](absl/debugging/)
<br /> The `debugging` library contains code useful for enabling leak
checks. Future updates will add stacktrace and symbolization utilities.
checks, and stacktrace and symbolization utilities.
* [`hash`](absl/hash/)
<br /> The `hash` library contains the hashing framework and default hash
functor implementations for hashable types in Abseil.
* [`memory`](absl/memory/)
<br /> The `memory` library contains C++11-compatible versions of
`std::make_unique()` and related memory management facilities.
@@ -90,6 +94,8 @@ Abseil contains the following C++ library components:
* [`types`](absl/types/)
<br /> The `types` library contains non-container utility types, like a
C++11-compatible version of the C++17 `std::optional` type.
* [`utility`](absl/utility/)
<br /> The `utility` library contains utility and helper code.
## License

WORKSPACE (24 changes)

@@ -1,21 +1,23 @@
workspace(name = "com_google_absl")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# Bazel toolchains
http_archive(
name = "bazel_toolchains",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-toolchains/archive/287b64e0a211fb7c23b74695f8d5f5205b61f4eb.tar.gz",
"https://github.com/bazelbuild/bazel-toolchains/archive/287b64e0a211fb7c23b74695f8d5f5205b61f4eb.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/bazel-toolchains/archive/bc09b995c137df042bb80a395b73d7ce6f26afbe.tar.gz",
"https://github.com/bazelbuild/bazel-toolchains/archive/bc09b995c137df042bb80a395b73d7ce6f26afbe.tar.gz",
],
strip_prefix = "bazel-toolchains-287b64e0a211fb7c23b74695f8d5f5205b61f4eb",
sha256 = "aca8ac6afd7745027ee4a43032b51a725a61a75a30f02cc58681ee87e4dcdf4b",
strip_prefix = "bazel-toolchains-bc09b995c137df042bb80a395b73d7ce6f26afbe",
sha256 = "4329663fe6c523425ad4d3c989a8ac026b04e1acedeceb56aa4b190fa7f3973c",
)
# GoogleTest/GoogleMock framework. Used by most unit-tests.
http_archive(
name = "com_google_googletest",
urls = ["https://github.com/google/googletest/archive/4e4df226fc197c0dda6e37f5c8c3845ca1e73a49.zip"],
strip_prefix = "googletest-4e4df226fc197c0dda6e37f5c8c3845ca1e73a49",
sha256 = "d4179caf54410968d1fff0b869e7d74803dd30209ee6645ccf1ca65ab6cf5e5a",
urls = ["https://github.com/google/googletest/archive/b4d4438df9479675a632b2f11125e57133822ece.zip"], # 2018-07-16
strip_prefix = "googletest-b4d4438df9479675a632b2f11125e57133822ece",
sha256 = "5aaa5d566517cae711e2a3505ea9a6438be1b37fcaae0ebcb96ccba9aa56f23a",
)
# Google benchmark.
@@ -25,11 +27,3 @@ http_archive(
strip_prefix = "benchmark-16703ff83c1ae6d53e5155df3bb3ab0bc96083be",
sha256 = "59f918c8ccd4d74b6ac43484467b500f1d64b40cc1010daa055375b322a43ba3",
)
# RE2 regular-expression framework. Used by some unit-tests.
http_archive(
name = "com_googlesource_code_re2",
urls = ["https://github.com/google/re2/archive/6cf8ccd82dbaab2668e9b13596c68183c9ecd13f.zip"],
strip_prefix = "re2-6cf8ccd82dbaab2668e9b13596c68183c9ecd13f",
sha256 = "279a852219dbfc504501775596089d30e9c0b29664ce4128b0ac4c841471a16a",
)

absl/BUILD.bazel (7 changes)

@@ -18,11 +18,10 @@ package(default_visibility = ["//visibility:public"])
licenses(["notice"]) # Apache 2.0
config_setting(
load(":compiler_config_setting.bzl", "create_llvm_config")
create_llvm_config(
name = "llvm_compiler",
values = {
"compiler": "llvm",
},
visibility = [":__subpackages__"],
)

absl/CMakeLists.txt (1 change)

@@ -20,6 +20,7 @@ add_subdirectory(base)
add_subdirectory(algorithm)
add_subdirectory(container)
add_subdirectory(debugging)
add_subdirectory(hash)
add_subdirectory(memory)
add_subdirectory(meta)
add_subdirectory(numeric)

absl/algorithm/CMakeLists.txt (72 changes)

@@ -14,50 +14,50 @@
# limitations under the License.
#
list(APPEND ALGORITHM_PUBLIC_HEADERS
absl_cc_library(
NAME
algorithm
HDRS
"algorithm.h"
"container.h"
COPTS
${ABSL_DEFAULT_COPTS}
PUBLIC
)
#
## TESTS
#
# test algorithm_test
list(APPEND ALGORITHM_TEST_SRC
absl_cc_test(
NAME
algorithm_test
SRCS
"algorithm_test.cc"
${ALGORITHM_PUBLIC_HEADERS}
${ALGORITHM_INTERNAL_HEADERS}
)
absl_header_library(
TARGET
absl_algorithm
EXPORT_NAME
algorithm
DEPS
absl::algorithm
gmock_main
)
absl_test(
TARGET
algorithm_test
SOURCES
${ALGORITHM_TEST_SRC}
PUBLIC_LIBRARIES
absl_cc_library(
NAME
algorithm_container
HDRS
"container.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::algorithm
absl::core_headers
absl::meta
PUBLIC
)
# test container_test
set(CONTAINER_TEST_SRC "container_test.cc")
absl_test(
TARGET
absl_cc_test(
NAME
container_test
SOURCES
${CONTAINER_TEST_SRC}
PUBLIC_LIBRARIES
absl::algorithm
SRCS
"container_test.cc"
DEPS
absl::algorithm_container
absl::base
absl::core_headers
absl::memory
absl::span
gmock_main
)

absl/algorithm/algorithm.h (4 changes)

@@ -27,7 +27,7 @@
#include <type_traits>
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace algorithm_internal {
@@ -146,7 +146,7 @@ ForwardIterator rotate(ForwardIterator first, ForwardIterator middle,
ForwardIterator>());
}
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_ALGORITHM_ALGORITHM_H_
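
The lts_2018_06_20 to lts_2018_12_18 renames above (repeated throughout this diff) retag every Abseil symbol with the new LTS inline namespace. A minimal sketch of the mechanism, with a hypothetical function for illustration:

namespace absl {
inline namespace lts_2018_12_18 {
int SomeAbseilFunction();  // hypothetical, for illustration only
}  // inline namespace lts_2018_12_18
}  // namespace absl

// Callers still write absl::SomeAbseilFunction(), but the mangled symbol now
// embeds lts_2018_12_18, so objects built against different LTS releases fail
// to link instead of mixing silently.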

absl/algorithm/container.h (88 changes)

@@ -46,6 +46,8 @@
#include <iterator>
#include <numeric>
#include <type_traits>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>
@@ -54,8 +56,7 @@
#include "absl/meta/type_traits.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace container_algorithm_internal {
// NOTE: it is important to defer to ADL lookup for building with C++ modules,
@@ -102,6 +103,17 @@ ContainerIter<C> c_begin(C& c) { return begin(c); }
template <typename C>
ContainerIter<C> c_end(C& c) { return end(c); }
template <typename T>
struct IsUnorderedContainer : std::false_type {};
template <class Key, class T, class Hash, class KeyEqual, class Allocator>
struct IsUnorderedContainer<
std::unordered_map<Key, T, Hash, KeyEqual, Allocator>> : std::true_type {};
template <class Key, class Hash, class KeyEqual, class Allocator>
struct IsUnorderedContainer<std::unordered_set<Key, Hash, KeyEqual, Allocator>>
: std::true_type {};
} // namespace container_algorithm_internal
// PUBLIC API
@@ -315,7 +327,7 @@ container_algorithm_internal::ContainerDifferenceType<const C> c_count_if(
// c_mismatch()
//
// Container-based version of the <algorithm> `std::mismatchf()` function to
// Container-based version of the <algorithm> `std::mismatch()` function to
// return the first element where two ordered containers differ.
template <typename C1, typename C2>
container_algorithm_internal::ContainerIterPairType<C1, C2>
@@ -495,7 +507,7 @@ BidirectionalIterator c_copy_backward(const C& src,
// Container-based version of the <algorithm> `std::move()` function to move
// a container's elements into an iterator.
template <typename C, typename OutputIterator>
OutputIterator c_move(C& src, OutputIterator dest) {
OutputIterator c_move(C&& src, OutputIterator dest) {
return std::move(container_algorithm_internal::c_begin(src),
container_algorithm_internal::c_end(src), dest);
}
@@ -635,7 +647,7 @@ container_algorithm_internal::ContainerIter<C> c_generate_n(C& c, Size n,
// Note: `c_xx()` <algorithm> container versions for `remove()`, `remove_if()`,
// and `unique()` are omitted, because it's not clear whether or not such
// functions should call erase their supplied sequences afterwards. Either
// functions should call erase on their supplied sequences afterwards. Either
// behavior would be surprising for a different set of users.
//
@@ -1155,7 +1167,13 @@ bool c_includes(const C1& c1, const C2& c2, Compare&& comp) {
// Container-based version of the <algorithm> `std::set_union()` function
// to return an iterator containing the union of two containers; duplicate
// values are not copied into the output.
template <typename C1, typename C2, typename OutputIterator>
template <typename C1, typename C2, typename OutputIterator,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_union(const C1& c1, const C2& c2, OutputIterator output) {
return std::set_union(container_algorithm_internal::c_begin(c1),
container_algorithm_internal::c_end(c1),
@@ -1165,7 +1183,13 @@ OutputIterator c_set_union(const C1& c1, const C2& c2, OutputIterator output) {
// Overload of c_set_union() for performing a merge using a `comp` other than
// `operator<`.
template <typename C1, typename C2, typename OutputIterator, typename Compare>
template <typename C1, typename C2, typename OutputIterator, typename Compare,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_union(const C1& c1, const C2& c2, OutputIterator output,
Compare&& comp) {
return std::set_union(container_algorithm_internal::c_begin(c1),
@@ -1179,7 +1203,13 @@ OutputIterator c_set_union(const C1& c1, const C2& c2, OutputIterator output,
//
// Container-based version of the <algorithm> `std::set_intersection()` function
// to return an iterator containing the intersection of two containers.
template <typename C1, typename C2, typename OutputIterator>
template <typename C1, typename C2, typename OutputIterator,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_intersection(const C1& c1, const C2& c2,
OutputIterator output) {
return std::set_intersection(container_algorithm_internal::c_begin(c1),
@@ -1190,7 +1220,13 @@ OutputIterator c_set_intersection(const C1& c1, const C2& c2,
// Overload of c_set_intersection() for performing a merge using a `comp` other
// than `operator<`.
template <typename C1, typename C2, typename OutputIterator, typename Compare>
template <typename C1, typename C2, typename OutputIterator, typename Compare,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_intersection(const C1& c1, const C2& c2,
OutputIterator output, Compare&& comp) {
return std::set_intersection(container_algorithm_internal::c_begin(c1),
@@ -1205,7 +1241,13 @@ OutputIterator c_set_intersection(const C1& c1, const C2& c2,
// Container-based version of the <algorithm> `std::set_difference()` function
// to return an iterator containing elements present in the first container but
// not in the second.
template <typename C1, typename C2, typename OutputIterator>
template <typename C1, typename C2, typename OutputIterator,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_difference(const C1& c1, const C2& c2,
OutputIterator output) {
return std::set_difference(container_algorithm_internal::c_begin(c1),
@@ -1216,7 +1258,13 @@ OutputIterator c_set_difference(const C1& c1, const C2& c2,
// Overload of c_set_difference() for performing a merge using a `comp` other
// than `operator<`.
template <typename C1, typename C2, typename OutputIterator, typename Compare>
template <typename C1, typename C2, typename OutputIterator, typename Compare,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_difference(const C1& c1, const C2& c2,
OutputIterator output, Compare&& comp) {
return std::set_difference(container_algorithm_internal::c_begin(c1),
@@ -1231,7 +1279,13 @@ OutputIterator c_set_difference(const C1& c1, const C2& c2,
// Container-based version of the <algorithm> `std::set_symmetric_difference()`
// function to return an iterator containing elements present in either one
// container or the other, but not both.
template <typename C1, typename C2, typename OutputIterator>
template <typename C1, typename C2, typename OutputIterator,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_symmetric_difference(const C1& c1, const C2& c2,
OutputIterator output) {
return std::set_symmetric_difference(
@@ -1243,7 +1297,13 @@ OutputIterator c_set_symmetric_difference(const C1& c1, const C2& c2,
// Overload of c_set_symmetric_difference() for performing a merge using a
// `comp` other than `operator<`.
template <typename C1, typename C2, typename OutputIterator, typename Compare>
template <typename C1, typename C2, typename OutputIterator, typename Compare,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C1>::value,
void>::type,
typename = typename std::enable_if<
!container_algorithm_internal::IsUnorderedContainer<C2>::value,
void>::type>
OutputIterator c_set_symmetric_difference(const C1& c1, const C2& c2,
OutputIterator output,
Compare&& comp) {
@@ -1638,7 +1698,7 @@ OutputIt c_partial_sum(const InputSequence& input, OutputIt output_first,
output_first, std::forward<BinaryOp>(op));
}
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_ALGORITHM_CONTAINER_H_
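
The IsUnorderedContainer guards added above drop the set-operation overloads from overload resolution when either argument is an unordered container, because std::set_union and friends require sorted input ranges. A minimal sketch of the effect, not part of the diff:

#include <iterator>
#include <set>
#include <unordered_set>
#include <vector>
#include "absl/algorithm/container.h"

void Demo() {
  std::set<int> a = {1, 2, 3};
  std::set<int> b = {3, 4};
  std::vector<int> out;
  absl::c_set_union(a, b, std::back_inserter(out));  // OK: ordered inputs.

  // std::unordered_set<int> ua = {1, 2}, ub = {2, 3};
  // absl::c_set_union(ua, ub, std::back_inserter(out));
  // ^ now a compile error: the enable_if guard rejects unordered containers,
  //   whose iteration order violates the sorted-range precondition.
}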

absl/algorithm/container_test.cc (15 changes)

@@ -636,6 +636,21 @@ TEST(MutatingTest, Move) {
Pointee(5)));
}
TEST(MutatingTest, MoveWithRvalue) {
auto MakeRValueSrc = [] {
std::vector<std::unique_ptr<int>> src;
src.emplace_back(absl::make_unique<int>(1));
src.emplace_back(absl::make_unique<int>(2));
src.emplace_back(absl::make_unique<int>(3));
return src;
};
std::vector<std::unique_ptr<int>> dest = MakeRValueSrc();
absl::c_move(MakeRValueSrc(), std::back_inserter(dest));
EXPECT_THAT(dest, ElementsAre(Pointee(1), Pointee(2), Pointee(3), Pointee(1),
Pointee(2), Pointee(3)));
}
TEST(MutatingTest, SwapRanges) {
std::vector<int> odds = {2, 4, 6};
std::vector<int> evens = {1, 3, 5};

absl/base/BUILD.bazel (70 changes)

@@ -19,6 +19,7 @@ load(
"ABSL_DEFAULT_COPTS",
"ABSL_TEST_COPTS",
"ABSL_EXCEPTIONS_FLAG",
"ABSL_EXCEPTIONS_FLAG_LINKOPTS",
)
package(default_visibility = ["//visibility:public"])
@@ -29,6 +30,7 @@ cc_library(
name = "spinlock_wait",
srcs = [
"internal/spinlock_akaros.inc",
"internal/spinlock_linux.inc",
"internal/spinlock_posix.inc",
"internal/spinlock_wait.cc",
"internal/spinlock_win32.inc",
@@ -73,7 +75,6 @@ cc_library(
copts = ABSL_DEFAULT_COPTS,
deps = [
":config",
":dynamic_annotations",
],
)
@@ -106,6 +107,7 @@ cc_library(
"internal/identity.h",
"internal/inline_variable.h",
"internal/invoke.h",
"internal/scheduling_mode.h",
],
copts = ABSL_DEFAULT_COPTS,
visibility = [
@@ -179,13 +181,13 @@ cc_library(
srcs = ["internal/throw_delegate.cc"],
hdrs = ["internal/throw_delegate.h"],
copts = ABSL_DEFAULT_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
visibility = [
"//absl:__subpackages__",
],
deps = [
":base",
":config",
":core_headers",
],
)
@@ -193,6 +195,7 @@ cc_test(
name = "throw_delegate_test",
srcs = ["throw_delegate_test.cc"],
copts = ABSL_TEST_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
deps = [
":throw_delegate",
"@com_google_googletest//:gtest_main",
@@ -225,6 +228,7 @@ cc_library(
srcs = ["internal/exception_safety_testing.cc"],
hdrs = ["internal/exception_safety_testing.h"],
copts = ABSL_TEST_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
deps = [
":base",
":config",
@@ -232,7 +236,7 @@ cc_library(
"//absl/memory",
"//absl/meta:type_traits",
"//absl/strings",
"//absl/types:optional",
"//absl/utility",
"@com_google_googletest//:gtest",
],
)
@@ -241,6 +245,7 @@ cc_test(
name = "exception_safety_testing_test",
srcs = ["exception_safety_testing_test.cc"],
copts = ABSL_TEST_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
deps = [
":exception_safety_testing",
"//absl/memory",
@@ -299,6 +304,7 @@ cc_test(
size = "medium",
srcs = ["spinlock_test_common.cc"],
copts = ABSL_TEST_COPTS,
tags = ["no_test_wasm"],
deps = [
":base",
":core_headers",
@@ -308,6 +314,33 @@ cc_test(
],
)
cc_library(
name = "spinlock_benchmark_common",
testonly = 1,
srcs = ["internal/spinlock_benchmark.cc"],
copts = ABSL_DEFAULT_COPTS,
visibility = [
"//absl/base:__pkg__",
],
deps = [
":base",
":base_internal",
"//absl/synchronization",
"@com_github_google_benchmark//:benchmark_main",
],
alwayslink = 1,
)
cc_binary(
name = "spinlock_benchmark",
testonly = 1,
copts = ABSL_DEFAULT_COPTS,
visibility = ["//visibility:private"],
deps = [
":spinlock_benchmark_common",
],
)
cc_library(
name = "endian",
hdrs = [
@@ -337,6 +370,9 @@ cc_test(
name = "config_test",
srcs = ["config_test.cc"],
copts = ABSL_TEST_COPTS,
tags = [
"no_test_wasm",
],
deps = [
":config",
"//absl/synchronization:thread_pool",
@@ -348,6 +384,9 @@ cc_test(
name = "call_once_test",
srcs = ["call_once_test.cc"],
copts = ABSL_TEST_COPTS,
tags = [
"no_test_wasm",
],
deps = [
":base",
":core_headers",
@@ -362,6 +401,7 @@ cc_test(
copts = ABSL_TEST_COPTS,
deps = [
":base",
"//absl/strings",
"@com_google_googletest//:gtest_main",
],
)
@@ -387,6 +427,7 @@ cc_test(
"//absl:windows": [],
"//conditions:default": ["-pthread"],
}),
tags = ["no_test_ios_x86_64"],
deps = [":malloc_internal"],
)
@@ -399,6 +440,9 @@ cc_test(
"//absl:windows": [],
"//conditions:default": ["-pthread"],
}),
tags = [
"no_test_wasm",
],
deps = [
":base",
":core_headers",
@@ -419,3 +463,23 @@ cc_test(
"@com_github_google_benchmark//:benchmark_main",
],
)
cc_library(
name = "bits",
hdrs = ["internal/bits.h"],
visibility = [
"//absl:__subpackages__",
],
deps = [":core_headers"],
)
cc_test(
name = "bits_test",
size = "small",
srcs = ["internal/bits_test.cc"],
copts = ABSL_TEST_COPTS,
deps = [
":bits",
"@com_google_googletest//:gtest_main",
],
)

absl/base/CMakeLists.txt (588 changes)

@@ -14,373 +14,401 @@
# limitations under the License.
#
list(APPEND BASE_PUBLIC_HEADERS
"attributes.h"
"call_once.h"
"casts.h"
absl_cc_library(
NAME
spinlock_wait
HDRS
"internal/scheduling_mode.h"
"internal/spinlock_wait.h"
SRCS
"internal/spinlock_akaros.inc"
"internal/spinlock_linux.inc"
"internal/spinlock_posix.inc"
"internal/spinlock_wait.cc"
"internal/spinlock_win32.inc"
DEPS
absl::core_headers
)
absl_cc_library(
NAME
config
HDRS
"config.h"
"policy_checks.h"
COPTS
${ABSL_DEFAULT_COPTS}
PUBLIC
)
absl_cc_library(
NAME
dynamic_annotations
HDRS
"dynamic_annotations.h"
"log_severity.h"
SRCS
"dynamic_annotations.cc"
COPTS
${ABSL_DEFAULT_COPTS}
DEFINES
"__CLANG_SUPPORT_DYN_ANNOTATION__"
PUBLIC
)
absl_cc_library(
NAME
core_headers
HDRS
"attributes.h"
"macros.h"
"optimization.h"
"policy_checks.h"
"port.h"
"thread_annotations.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::config
PUBLIC
)
list(APPEND BASE_INTERNAL_HEADERS
"internal/atomic_hook.h"
"internal/cycleclock.h"
absl_cc_library(
NAME
malloc_internal
HDRS
"internal/direct_mmap.h"
"internal/endian.h"
"internal/exception_testing.h"
"internal/exception_safety_testing.h"
"internal/low_level_alloc.h"
SRCS
"internal/low_level_alloc.cc"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::base
absl::config
absl::core_headers
absl::dynamic_annotations
absl::spinlock_wait
)
absl_cc_library(
NAME
base_internal
HDRS
"internal/hide_ptr.h"
"internal/identity.h"
"internal/invoke.h"
"internal/inline_variable.h"
"internal/low_level_alloc.h"
"internal/invoke.h"
COPTS
${ABSL_DEFAULT_COPTS}
)
absl_cc_library(
NAME
base
HDRS
"call_once.h"
"casts.h"
"internal/atomic_hook.h"
"internal/cycleclock.h"
"internal/low_level_scheduling.h"
"internal/per_thread_tls.h"
"internal/pretty_function.h"
"internal/raw_logging.h"
"internal/scheduling_mode.h"
"internal/spinlock.h"
"internal/spinlock_wait.h"
"internal/sysinfo.h"
"internal/thread_identity.h"
"internal/throw_delegate.h"
"internal/tsan_mutex_interface.h"
"internal/unaligned_access.h"
"internal/unscaledcycleclock.h"
)
# absl_base main library
list(APPEND BASE_SRC
"log_severity.h"
SRCS
"internal/cycleclock.cc"
"internal/raw_logging.cc"
"internal/spinlock.cc"
"internal/sysinfo.cc"
"internal/thread_identity.cc"
"internal/unscaledcycleclock.cc"
"internal/low_level_alloc.cc"
${BASE_PUBLIC_HEADERS}
${BASE_INTERNAL_HEADERS}
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::base_internal
absl::config
absl::core_headers
absl::dynamic_annotations
absl::spinlock_wait
PUBLIC
)
absl_library(
TARGET
absl_base
SOURCES
${BASE_SRC}
PUBLIC_LIBRARIES
absl_dynamic_annotations
absl_spinlock_wait
EXPORT_NAME
base
absl_cc_library(
NAME
throw_delegate
HDRS
"internal/throw_delegate.h"
SRCS
"internal/throw_delegate.cc"
COPTS
${ABSL_DEFAULT_COPTS}
${ABSL_EXCEPTIONS_FLAG}
DEPS
absl::base
)
# throw delegate library
set(THROW_DELEGATE_SRC "internal/throw_delegate.cc")
absl_cc_library(
NAME
exception_testing
HDRS
"internal/exception_testing.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::config
gtest
TESTONLY
)
absl_library(
TARGET
absl_throw_delegate
SOURCES
${THROW_DELEGATE_SRC}
PUBLIC_LIBRARIES
${THROW_DELEGATE_PUBLIC_LIBRARIES}
PRIVATE_COMPILE_FLAGS
${ABSL_EXCEPTIONS_FLAG}
EXPORT_NAME
throw_delegate
absl_cc_library(
NAME
pretty_function
HDRS
"internal/pretty_function.h"
COPTS
${ABSL_DEFAULT_COPTS}
)
if(BUILD_TESTING)
# exception-safety testing library
set(EXCEPTION_SAFETY_TESTING_SRC
absl_cc_library(
NAME
exception_safety_testing
HDRS
"internal/exception_safety_testing.h"
SRCS
"internal/exception_safety_testing.cc"
)
set(EXCEPTION_SAFETY_TESTING_PUBLIC_LIBRARIES
${ABSL_TEST_COMMON_LIBRARIES}
COPTS
${ABSL_DEFAULT_COPTS}
${ABSL_EXCEPTIONS_FLAG}
DEPS
absl::base
absl::config
absl::pretty_function
absl::memory
absl::meta
absl::strings
absl::optional
absl::utility
gtest
)
absl_library(
TARGET
absl_base_internal_exception_safety_testing
SOURCES
${EXCEPTION_SAFETY_TESTING_SRC}
PUBLIC_LIBRARIES
${EXCEPTION_SAFETY_TESTING_PUBLIC_LIBRARIES}
PRIVATE_COMPILE_FLAGS
${ABSL_EXCEPTIONS_FLAG}
)
endif()
# dynamic_annotations library
set(DYNAMIC_ANNOTATIONS_SRC "dynamic_annotations.cc")
absl_library(
TARGET
absl_dynamic_annotations
SOURCES
${DYNAMIC_ANNOTATIONS_SRC}
)
# spinlock_wait library
set(SPINLOCK_WAIT_SRC "internal/spinlock_wait.cc")
absl_library(
TARGET
absl_spinlock_wait
SOURCES
${SPINLOCK_WAIT_SRC}
)
# malloc_internal library
list(APPEND MALLOC_INTERNAL_SRC
"internal/low_level_alloc.cc"
TESTONLY
)
absl_library(
TARGET
absl_malloc_internal
SOURCES
${MALLOC_INTERNAL_SRC}
PUBLIC_LIBRARIES
absl_dynamic_annotations
absl_cc_test(
NAME
absl_exception_safety_testing_test
SRCS
"exception_safety_testing_test.cc"
COPTS
${ABSL_EXCEPTIONS_FLAG}
LINKOPTS
${ABSL_EXCEPTIONS_FLAG_LINKOPTS}
DEPS
absl::exception_safety_testing
absl::memory
gtest_main
)
#
## TESTS
#
# call once test
set(ATOMIC_HOOK_TEST_SRC "internal/atomic_hook_test.cc")
set(ATOMIC_HOOK_TEST_PUBLIC_LIBRARIES absl::base)
absl_test(
TARGET
absl_cc_test(
NAME
atomic_hook_test
SOURCES
${ATOMIC_HOOK_TEST_SRC}
PUBLIC_LIBRARIES
${ATOMIC_HOOK_TEST_PUBLIC_LIBRARIES}
)
# call once test
set(CALL_ONCE_TEST_SRC "call_once_test.cc")
set(CALL_ONCE_TEST_PUBLIC_LIBRARIES absl::base absl::synchronization)
absl_test(
TARGET
call_once_test
SOURCES
${CALL_ONCE_TEST_SRC}
PUBLIC_LIBRARIES
${CALL_ONCE_TEST_PUBLIC_LIBRARIES}
SRCS
"internal/atomic_hook_test.cc"
DEPS
absl::base
absl::core_headers
gtest_main
)
# test bit_cast_test
set(BIT_CAST_TEST_SRC "bit_cast_test.cc")
absl_test(
TARGET
absl_cc_test(
NAME
bit_cast_test
SOURCES
${BIT_CAST_TEST_SRC}
SRCS
"bit_cast_test.cc"
DEPS
absl::base
absl::core_headers
gtest_main
)
# test absl_throw_delegate_test
set(THROW_DELEGATE_TEST_SRC "throw_delegate_test.cc")
set(THROW_DELEGATE_TEST_PUBLIC_LIBRARIES absl::base absl_throw_delegate)
absl_test(
TARGET
absl_cc_test(
NAME
throw_delegate_test
SOURCES
${THROW_DELEGATE_TEST_SRC}
PUBLIC_LIBRARIES
${THROW_DELEGATE_TEST_PUBLIC_LIBRARIES}
)
# test invoke_test
set(INVOKE_TEST_SRC "invoke_test.cc")
set(INVOKE_TEST_PUBLIC_LIBRARIES absl::strings)
absl_test(
TARGET
invoke_test
SOURCES
${INVOKE_TEST_SRC}
PUBLIC_LIBRARIES
${INVOKE_TEST_PUBLIC_LIBRARIES}
SRCS
"throw_delegate_test.cc"
DEPS
absl::base
absl_internal_throw_delegate
gtest_main
)
# test inline_variable_test
list(APPEND INLINE_VARIABLE_TEST_SRC
absl_cc_test(
NAME
inline_variable_test
SRCS
"internal/inline_variable_testing.h"
"inline_variable_test.cc"
"inline_variable_test_a.cc"
"inline_variable_test_b.cc"
DEPS
absl::base_internal
gtest_main
)
set(INLINE_VARIABLE_TEST_PUBLIC_LIBRARIES absl::base)
absl_test(
TARGET
inline_variable_test
SOURCES
${INLINE_VARIABLE_TEST_SRC}
PUBLIC_LIBRARIES
${INLINE_VARIABLE_TEST_PUBLIC_LIBRARIES}
absl_cc_test(
NAME
invoke_test
SRCS
"invoke_test.cc"
DEPS
absl::base_internal
absl::memory
absl::strings
gmock
gtest_main
)
# test spinlock_test_common
set(SPINLOCK_TEST_COMMON_SRC "spinlock_test_common.cc")
set(SPINLOCK_TEST_COMMON_PUBLIC_LIBRARIES absl::base absl::synchronization)
absl_test(
TARGET
absl_cc_library(
NAME
spinlock_test_common
SOURCES
${SPINLOCK_TEST_COMMON_SRC}
PUBLIC_LIBRARIES
${SPINLOCK_TEST_COMMON_PUBLIC_LIBRARIES}
SRCS
"spinlock_test_common.cc"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::base
absl::core_headers
absl::spinlock_wait
absl::synchronization
gtest
TESTONLY
)
# test spinlock_test
set(SPINLOCK_TEST_SRC "spinlock_test_common.cc")
set(SPINLOCK_TEST_PUBLIC_LIBRARIES absl::base absl::synchronization)
absl_test(
TARGET
# On the Bazel BUILD side this target uses "alwayslink = 1", which is not implemented here
absl_cc_test(
NAME
spinlock_test
SOURCES
${SPINLOCK_TEST_SRC}
PUBLIC_LIBRARIES
${SPINLOCK_TEST_PUBLIC_LIBRARIES}
SRCS
"spinlock_test_common.cc"
DEPS
absl::base
absl::core_headers
absl::spinlock_wait
absl::synchronization
gtest_main
)
absl_cc_library(
NAME
endian
HDRS
"internal/endian.h"
"internal/unaligned_access.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::config
absl::core_headers
PUBLIC
)
# test endian_test
set(ENDIAN_TEST_SRC "internal/endian_test.cc")
absl_test(
TARGET
absl_cc_test(
NAME
endian_test
SOURCES
${ENDIAN_TEST_SRC}
SRCS
"internal/endian_test.cc"
DEPS
absl::base
absl::config
absl::endian
gtest_main
)
# test config_test
set(CONFIG_TEST_SRC "config_test.cc")
set(CONFIG_TEST_PUBLIC_LIBRARIES absl::base absl::synchronization)
absl_test(
TARGET
absl_cc_test(
NAME
config_test
SOURCES
${CONFIG_TEST_SRC}
PUBLIC_LIBRARIES
${CONFIG_TEST_PUBLIC_LIBRARIES}
SRCS
"config_test.cc"
DEPS
absl::config
absl::synchronization
gtest_main
)
absl_cc_test(
NAME
call_once_test
SRCS
"call_once_test.cc"
DEPS
absl::base
absl::core_headers
absl::synchronization
gtest_main
)
# test raw_logging_test
set(RAW_LOGGING_TEST_SRC "raw_logging_test.cc")
set(RAW_LOGGING_TEST_PUBLIC_LIBRARIES absl::base)
absl_test(
TARGET
absl_cc_test(
NAME
raw_logging_test
SOURCES
${RAW_LOGGING_TEST_SRC}
PUBLIC_LIBRARIES
${RAW_LOGGING_TEST_PUBLIC_LIBRARIES}
SRCS
"raw_logging_test.cc"
DEPS
absl::base
absl::strings
gtest_main
)
# test sysinfo_test
set(SYSINFO_TEST_SRC "internal/sysinfo_test.cc")
set(SYSINFO_TEST_PUBLIC_LIBRARIES absl::base absl::synchronization)
absl_test(
TARGET
absl_cc_test(
NAME
sysinfo_test
SOURCES
${SYSINFO_TEST_SRC}
PUBLIC_LIBRARIES
${SYSINFO_TEST_PUBLIC_LIBRARIES}
SRCS
"internal/sysinfo_test.cc"
DEPS
absl::base
absl::synchronization
gtest_main
)
# test low_level_alloc_test
set(LOW_LEVEL_ALLOC_TEST_SRC "internal/low_level_alloc_test.cc")
set(LOW_LEVEL_ALLOC_TEST_PUBLIC_LIBRARIES absl::base)
absl_test(
TARGET
absl_cc_test(
NAME
low_level_alloc_test
SOURCES
${LOW_LEVEL_ALLOC_TEST_SRC}
PUBLIC_LIBRARIES
${LOW_LEVEL_ALLOC_TEST_PUBLIC_LIBRARIES}
SRCS
"internal/low_level_alloc_test.cc"
DEPS
absl::malloc_internal
Threads::Threads
)
# test thread_identity_test
set(THREAD_IDENTITY_TEST_SRC "internal/thread_identity_test.cc")
set(THREAD_IDENTITY_TEST_PUBLIC_LIBRARIES absl::base absl::synchronization)
absl_test(
TARGET
absl_cc_test(
NAME
thread_identity_test
SOURCES
${THREAD_IDENTITY_TEST_SRC}
PUBLIC_LIBRARIES
${THREAD_IDENTITY_TEST_PUBLIC_LIBRARIES}
SRCS
"internal/thread_identity_test.cc"
DEPS
absl::base
absl::core_headers
absl::synchronization
Threads::Threads
gtest_main
)
absl_cc_library(
NAME
bits
HDRS
"internal/bits.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::core_headers
)
absl_cc_test(
NAME
bits_test
SRCS
"internal/bits_test.cc"
DEPS
absl::bits
gtest_main
)

@ -100,7 +100,7 @@
// ABSL_PRINTF_ATTRIBUTE
// ABSL_SCANF_ATTRIBUTE
//
// Tells the compiler to perform `printf` format string checking if the
// compiler supports it; see the 'format' attribute in
// <http://gcc.gnu.org/onlinedocs/gcc-4.7.0/gcc/Function-Attributes.html>.
//
@ -155,7 +155,12 @@
// ABSL_ATTRIBUTE_WEAK
//
// Tags a function as weak for the purposes of compilation and linking.
// Weak attributes currently do not work properly in LLVM's Windows backend,
// so disable them there. See https://bugs.llvm.org/show_bug.cgi?id=37598
// for further information.
#if (ABSL_HAVE_ATTRIBUTE(weak) || \
(defined(__GNUC__) && !defined(__clang__))) && \
!(defined(__llvm__) && defined(_WIN32))
#undef ABSL_ATTRIBUTE_WEAK
#define ABSL_ATTRIBUTE_WEAK __attribute__((weak))
#define ABSL_HAVE_ATTRIBUTE_WEAK 1
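For orientation, a minimal sketch of how a weak symbol is typically consumed on toolchains where the attribute is supported; the hook function here is hypothetical, not part of Abseil:

#include "absl/base/attributes.h"

// Hypothetical hook: this weak default definition is used unless another
// translation unit provides a strong definition, which the linker prefers.
ABSL_ATTRIBUTE_WEAK void MyInitHook() {}

void RunStartup() {
  MyInitHook();  // runs the strong override when one is linked in
}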
@ -296,13 +301,13 @@
// ABSL_HAVE_ATTRIBUTE_SECTION
//
// Indicates whether labeled sections are supported. Weak symbol support is
// a prerequisite. Labeled sections are not supported on Darwin/iOS.
#ifdef ABSL_HAVE_ATTRIBUTE_SECTION
#error ABSL_HAVE_ATTRIBUTE_SECTION cannot be directly set
#elif (ABSL_HAVE_ATTRIBUTE(section) || \
(defined(__GNUC__) && !defined(__clang__))) && \
!defined(__APPLE__) && ABSL_HAVE_ATTRIBUTE_WEAK
#define ABSL_HAVE_ATTRIBUTE_SECTION 1
// ABSL_ATTRIBUTE_SECTION
@ -397,17 +402,28 @@
// ABSL_MUST_USE_RESULT
//
// Tells the compiler to warn about unused results.
//
// When annotating a function, it must appear as the first part of the
// declaration or definition. The compiler will warn if the return value from
// such a function is unused:
//
// ABSL_MUST_USE_RESULT Sprocket* AllocateSprocket();
// AllocateSprocket(); // Triggers a warning.
//
// When annotating a class, it is equivalent to annotating every function which
// returns an instance.
//
// class ABSL_MUST_USE_RESULT Sprocket {};
// Sprocket(); // Triggers a warning.
//
// Sprocket MakeSprocket();
// MakeSprocket(); // Triggers a warning.
//
// Note that references and pointers are not instances:
//
// Sprocket* SprocketPointer();
// SprocketPointer(); // Does *not* trigger a warning.
//
// ABSL_MUST_USE_RESULT allows using cast-to-void to suppress the unused result
// warning. For that, warn_unused_result is used only for clang but not for gcc.
@ -494,14 +510,27 @@
#define ABSL_XRAY_LOG_ARGS(N)
#endif
// ABSL_ATTRIBUTE_REINITIALIZES
//
// Indicates that a member function reinitializes the entire object to a known
// state, independent of the previous state of the object.
//
// The clang-tidy check bugprone-use-after-move allows member functions marked
// with this attribute to be called on objects that have been moved from;
// without the attribute, this would result in a use-after-move warning.
#if ABSL_HAVE_CPP_ATTRIBUTE(clang::reinitializes)
#define ABSL_ATTRIBUTE_REINITIALIZES [[clang::reinitializes]]
#else
#define ABSL_ATTRIBUTE_REINITIALIZES
#endif
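A minimal sketch of the intended placement; the Buffer type is illustrative, not from Abseil:

#include <string>
#include "absl/base/attributes.h"

class Buffer {  // illustrative type
 public:
  // Restores a known-empty state, so clang-tidy's bugprone-use-after-move
  // allows calling Clear() on a moved-from Buffer without a warning.
  ABSL_ATTRIBUTE_REINITIALIZES void Clear() { data_.clear(); }

 private:
  std::string data_;
};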
// -----------------------------------------------------------------------------
// Variable Attributes
// -----------------------------------------------------------------------------
// ABSL_ATTRIBUTE_UNUSED
//
// Prevents the compiler from complaining about variables that appear unused.
#if ABSL_HAVE_ATTRIBUTE(unused) || (defined(__GNUC__) && !defined(__clang__))
#undef ABSL_ATTRIBUTE_UNUSED
#define ABSL_ATTRIBUTE_UNUSED __attribute__((__unused__))

@ -22,7 +22,7 @@
#include "absl/base/macros.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace {
template <int N>
@ -105,5 +105,5 @@ TEST(BitCast, Double) {
}
} // namespace
} // inline namespace lts_2018_12_18
} // namespace absl

@ -39,7 +39,7 @@
#include "absl/base/port.h"
namespace absl {
inline namespace lts_2018_12_18 {
class once_flag;
@ -151,11 +151,7 @@ void CallOnceImpl(std::atomic<uint32_t>* control,
old_control != kOnceRunning &&
old_control != kOnceWaiter &&
old_control != kOnceDone) {
ABSL_RAW_LOG(FATAL, "Unexpected value for control word: 0x%lx",
static_cast<unsigned long>(old_control)); // NOLINT
}
}
@ -212,7 +208,7 @@ void call_once(absl::once_flag& flag, Callable&& fn, Args&&... args) {
}
}
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_CALL_ONCE_H_
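As a usage sketch of the API declared above; the global and initializer names are illustrative:

#include "absl/base/call_once.h"

absl::once_flag init_flag;  // must have static storage duration
int* table = nullptr;       // illustrative lazily-initialized global

void InitTable() { table = new int[256](); }

int* GetTable() {
  // InitTable() executes exactly once even with concurrent callers;
  // all other callers block until initialization completes.
  absl::call_once(init_flag, InitTable);
  return table;
}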

@ -22,7 +22,7 @@
#include "absl/synchronization/mutex.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace {
absl::once_flag once;
@ -100,5 +100,5 @@ TEST(CallOnceTest, ExecutionCount) {
}
} // namespace
} // inline namespace lts_2018_12_18
} // namespace absl

@ -25,12 +25,36 @@
#define ABSL_BASE_CASTS_H_
#include <cstring>
#include <memory>
#include <type_traits>
#include "absl/base/internal/identity.h"
#include "absl/base/macros.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace internal_casts {
// NOTE: Not a fully compliant implementation of `std::is_trivially_copyable`.
// TODO(calabrese) Branch on implementations that directly provide
// `std::is_trivially_copyable`, create a more rigorous workaround, and publicly
// expose in meta/type_traits.
template <class T>
struct is_trivially_copyable
: std::integral_constant<
bool, std::is_destructible<T>::value&& __has_trivial_destructor(T) &&
__has_trivial_copy(T) && __has_trivial_assign(T)> {};
template <class Dest, class Source>
struct is_bitcastable
: std::integral_constant<bool,
sizeof(Dest) == sizeof(Source) &&
is_trivially_copyable<Source>::value &&
is_trivially_copyable<Dest>::value &&
std::is_default_constructible<Dest>::value> {};
} // namespace internal_casts
// implicit_cast()
//
@ -82,7 +106,7 @@ inline namespace lts_2018_06_20 {
//
// Such implicit cast chaining may be useful within template logic.
template <typename To>
constexpr To implicit_cast(typename absl::internal::identity_t<To> to) {
return to;
}
@ -126,7 +150,32 @@ inline To implicit_cast(typename absl::internal::identity_t<To> to) {
// and reading its bits back using a different type. A `bit_cast()` avoids this
// issue by implementing its casts using `memcpy()`, which avoids introducing
// this undefined behavior.
//
// NOTE: The requirements here are more strict than the bit_cast of standard
// proposal p0476 due to the need for workarounds and lack of intrinsics.
// Specifically, this implementation also requires `Dest` to be
// default-constructible.
template <
typename Dest, typename Source,
typename std::enable_if<internal_casts::is_bitcastable<Dest, Source>::value,
int>::type = 0>
inline Dest bit_cast(const Source& source) {
Dest dest;
memcpy(static_cast<void*>(std::addressof(dest)),
static_cast<const void*>(std::addressof(source)), sizeof(dest));
return dest;
}
// NOTE: This overload is only picked if the requirements of bit_cast are not
// met. It is therefore UB, but is provided temporarily as previous versions of
// this function template were unchecked. Do not use this in new code.
template <
typename Dest, typename Source,
typename std::enable_if<
!internal_casts::is_bitcastable<Dest, Source>::value, int>::type = 0>
ABSL_DEPRECATED(
"absl::bit_cast type requirements were violated. Update the types being "
"used such that they are the same size and are both TriviallyCopyable.")
inline Dest bit_cast(const Source& source) {
static_assert(sizeof(Dest) == sizeof(Source),
"Source and destination types should have equal sizes.");
@ -136,7 +185,7 @@ inline Dest bit_cast(const Source& source) {
return dest;
}
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_CASTS_H_
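A brief usage sketch: float and uint32_t are the same size and trivially copyable, so the checked overload above is selected:

#include <cstdint>
#include "absl/base/casts.h"

uint32_t FloatBits(float f) {
  // Copies the object representation of `f` (via memcpy internally),
  // avoiding the undefined behavior of a reinterpret_cast type pun.
  return absl::bit_cast<uint32_t>(f);
}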

@ -139,12 +139,18 @@
#ifdef ABSL_HAVE_THREAD_LOCAL
#error ABSL_HAVE_THREAD_LOCAL cannot be directly set
#elif defined(__APPLE__)
// Notes:
// * Xcode's clang did not support `thread_local` until version 8, and
// even then not for all iOS < 9.0.
// * Xcode 9.3 started disallowing `thread_local` for 32-bit iOS simulator
// targeting iOS 9.x.
// * Xcode 10 moves the deployment target check for iOS < 9.0 to link time
// making __has_feature unreliable there.
//
// Otherwise, `__has_feature` is only supported by Clang so it has to be inside
// `defined(__APPLE__)` check.
#if __has_feature(cxx_thread_local) && \
!(TARGET_OS_IPHONE && __IPHONE_OS_VERSION_MIN_REQUIRED < __IPHONE_9_0)
#define ABSL_HAVE_THREAD_LOCAL 1
#endif
#else // !defined(__APPLE__)
@ -199,7 +205,7 @@
#define ABSL_HAVE_INTRINSIC_INT128 1
#elif defined(__CUDACC__)
// __CUDACC_VER__ is a full version number before CUDA 9, and is defined to a
// string explaining that it has been removed starting with CUDA 9. We use
// nested #ifs because there is no short-circuiting in the preprocessor.
// NOTE: `__CUDACC__` could be undefined while `__CUDACC_VER__` is defined.
#if __CUDACC_VER__ >= 70000
@ -268,7 +274,8 @@
#error ABSL_HAVE_MMAP cannot be directly set
#elif defined(__linux__) || defined(__APPLE__) || defined(__FreeBSD__) || \
defined(__ros__) || defined(__native_client__) || defined(__asmjs__) || \
defined(__wasm__) || defined(__Fuchsia__) || defined(__sun) || \
defined(__ASYLO__)
#define ABSL_HAVE_MMAP 1
#endif
@ -322,6 +329,8 @@
#define ABSL_HAVE_ALARM 1
#elif defined(_MSC_VER)
// feature tests for Microsoft's library
#elif defined(__EMSCRIPTEN__)
// emscripten doesn't support signals
#elif defined(__native_client__)
#else
// other standard libraries
@ -356,6 +365,18 @@
#error "absl endian detection needs to be set up for your compiler"
#endif
// MacOS 10.13 doesn't let you use <any>, <optional>, or <variant> even though
// the headers exist and are publicly noted to work. See
// https://github.com/abseil/abseil-cpp/issues/207 and
// https://developer.apple.com/documentation/xcode_release_notes/xcode_10_release_notes
#if defined(__APPLE__) && defined(_LIBCPP_VERSION) && \
defined(__ENVIRONMENT_MAC_OS_X_VERSION_MIN_REQUIRED__) && \
__ENVIRONMENT_MAC_OS_X_VERSION_MIN_REQUIRED__ >= 101400
#define ABSL_INTERNAL_MACOS_HAS_CXX_17_TYPES 1
#else
#define ABSL_INTERNAL_MACOS_HAS_CXX_17_TYPES 0
#endif
// ABSL_HAVE_STD_ANY
//
// Checks whether C++17 std::any is available by checking whether <any> exists.
@ -364,7 +385,8 @@
#endif
#ifdef __has_include
#if __has_include(<any>) && __cplusplus >= 201703L && \
ABSL_INTERNAL_MACOS_HAS_CXX_17_TYPES
#define ABSL_HAVE_STD_ANY 1
#endif
#endif
@ -377,7 +399,8 @@
#endif
#ifdef __has_include
#if __has_include(<optional>) && __cplusplus >= 201703L && \
ABSL_INTERNAL_MACOS_HAS_CXX_17_TYPES
#define ABSL_HAVE_STD_OPTIONAL 1
#endif
#endif
@ -390,7 +413,8 @@
#endif
#ifdef __has_include
#if __has_include(<variant>) && __cplusplus >= 201703L && \
ABSL_INTERNAL_MACOS_HAS_CXX_17_TYPES
#define ABSL_HAVE_STD_VARIANT 1
#endif
#endif
@ -414,14 +438,21 @@
// <string_view>, <variant> is implemented) or higher. Also, `__cplusplus` is
// not correctly set by MSVC, so we use `_MSVC_LANG` to check the language
// version.
// TODO(zhangxy): fix tests before enabling aliasing for `std::any`.
#if defined(_MSC_VER) && _MSC_VER >= 1910 && \
((defined(_MSVC_LANG) && _MSVC_LANG > 201402) || __cplusplus > 201402)
// #define ABSL_HAVE_STD_ANY 1
#define ABSL_HAVE_STD_OPTIONAL 1
#define ABSL_HAVE_STD_VARIANT 1
#define ABSL_HAVE_STD_STRING_VIEW 1
#endif
// In debug mode, MSVC 2017's std::variant throws an EXCEPTION_ACCESS_VIOLATION
// SEH exception from emplace for variant<SomeStruct> when constructing the
// struct can throw. This defeats some of variant_test and
// variant_exception_safety_test.
#if defined(_MSC_VER) && _MSC_VER >= 1700 && defined(_DEBUG)
#define ABSL_INTERNAL_MSVC_2017_DBG_MODE
#endif
#endif // ABSL_BASE_CONFIG_H_

@ -38,7 +38,7 @@ template <typename F>
void ExpectNoThrow(const F& f) {
try {
f();
} catch (const TestException& e) {
ADD_FAILURE() << "Unexpected exception thrown from " << e.what();
}
}
@ -179,7 +179,7 @@ TEST(ThrowingValueTest, ThrowingStreamOps) {
}
// Tests the operator<< of ThrowingValue by forcing ConstructorTracker to emit
// a nonfatal failure that contains the string representation of the Thrower
TEST(ThrowingValueTest, StreamOpsOutput) {
using ::testing::TypeSpec;
exceptions_internal::ConstructorTracker ct(exceptions_internal::countdown);
@ -548,21 +548,21 @@ TEST(ExceptionSafetyTesterTest, IncompleteTypesAreNotTestable) {
// Test that providing operation and contracts still does not allow for the
// invocation of .Test() and .Test(op) because it lacks a factory
auto without_fac =
testing::MakeExceptionSafetyTester().WithOperation(op).WithContracts(
inv, testing::strong_guarantee);
EXPECT_FALSE(HasNullaryTest(without_fac));
EXPECT_FALSE(HasUnaryTest(without_fac));
// Test that providing contracts and factory allows the invocation of
// .Test(op) but does not allow for .Test() because it lacks an operation
auto without_op = testing::MakeExceptionSafetyTester()
.WithContracts(inv, testing::strong_guarantee)
.WithFactory(fac);
EXPECT_FALSE(HasNullaryTest(without_op));
EXPECT_TRUE(HasUnaryTest(without_op));
// Test that providing operation and factory still does not allow for the
// invocation of .Test() and .Test(op) because it lacks contracts
auto without_inv =
testing::MakeExceptionSafetyTester().WithOperation(op).WithFactory(fac);
EXPECT_FALSE(HasNullaryTest(without_inv));
@ -577,7 +577,7 @@ std::unique_ptr<ExampleStruct> ExampleFunctionFactory() {
void ExampleFunctionOperation(ExampleStruct*) {}
testing::AssertionResult ExampleFunctionContract(ExampleStruct*) {
return testing::AssertionSuccess();
}
@ -593,16 +593,16 @@ struct {
struct {
testing::AssertionResult operator()(ExampleStruct* example_struct) const {
return ExampleFunctionContract(example_struct);
}
} example_struct_contract;
auto example_lambda_factory = []() { return ExampleFunctionFactory(); };
auto example_lambda_operation = [](ExampleStruct*) {};
auto example_lambda_contract = [](ExampleStruct* example_struct) {
return ExampleFunctionContract(example_struct);
};
// Testing that function references, pointers, structs with operator() and
@ -612,28 +612,28 @@ TEST(ExceptionSafetyTesterTest, MixedFunctionTypes) {
EXPECT_TRUE(testing::MakeExceptionSafetyTester()
.WithFactory(ExampleFunctionFactory)
.WithOperation(ExampleFunctionOperation)
.WithContracts(ExampleFunctionContract)
.Test());
// function pointer
EXPECT_TRUE(testing::MakeExceptionSafetyTester()
.WithFactory(&ExampleFunctionFactory)
.WithOperation(&ExampleFunctionOperation)
.WithContracts(&ExampleFunctionContract)
.Test());
// struct
EXPECT_TRUE(testing::MakeExceptionSafetyTester()
.WithFactory(example_struct_factory)
.WithOperation(example_struct_operation)
.WithContracts(example_struct_contract)
.Test());
// lambda
EXPECT_TRUE(testing::MakeExceptionSafetyTester()
.WithFactory(example_lambda_factory)
.WithOperation(example_lambda_operation)
.WithContracts(example_lambda_contract)
.Test());
}
@ -658,9 +658,9 @@ struct {
} invoker;
auto tester =
testing::MakeExceptionSafetyTester().WithOperation(invoker).WithContracts(
CheckNonNegativeInvariants);
auto strong_tester = tester.WithContracts(testing::strong_guarantee);
struct FailsBasicGuarantee : public NonNegative {
void operator()() {
@ -690,7 +690,7 @@ TEST(ExceptionCheckTest, StrongGuaranteeFailure) {
EXPECT_FALSE(strong_tester.WithInitialValue(FollowsBasicGuarantee{}).Test());
}
struct BasicGuaranteeWithExtraContracts : public NonNegative {
// After operator(), i is incremented. If operator() throws, i is set to 9999
void operator()() {
int old_i = i;
@ -701,21 +701,21 @@ struct BasicGuaranteeWithExtraInvariants : public NonNegative {
static constexpr int kExceptionSentinel = 9999;
};
constexpr int BasicGuaranteeWithExtraContracts::kExceptionSentinel;
TEST(ExceptionCheckTest, BasicGuaranteeWithExtraContracts) {
auto tester_with_val =
tester.WithInitialValue(BasicGuaranteeWithExtraContracts{});
EXPECT_TRUE(tester_with_val.Test());
EXPECT_TRUE(
tester_with_val
.WithContracts([](BasicGuaranteeWithExtraContracts* o) {
if (o->i == BasicGuaranteeWithExtraContracts::kExceptionSentinel) {
return testing::AssertionSuccess();
}
return testing::AssertionFailure()
<< "i should be "
<< BasicGuaranteeWithExtraContracts::kExceptionSentinel
<< ", but is " << o->i;
})
.Test());
@ -740,7 +740,7 @@ struct HasReset : public NonNegative {
void reset() { i = 0; }
};
testing::AssertionResult CheckHasResetContracts(HasReset* h) {
h->reset();
return testing::AssertionResult(h->i == 0);
}
@ -759,17 +759,29 @@ TEST(ExceptionCheckTest, ModifyingChecker) {
};
EXPECT_FALSE(tester.WithInitialValue(FollowsBasicGuarantee{})
.WithContracts(set_to_1000, is_1000)
.Test());
EXPECT_TRUE(strong_tester.WithInitialValue(FollowsStrongGuarantee{})
.WithContracts(increment)
.Test());
EXPECT_TRUE(testing::MakeExceptionSafetyTester()
.WithInitialValue(HasReset{})
.WithContracts(CheckHasResetContracts)
.Test(invoker));
}
TEST(ExceptionSafetyTesterTest, ResetsCountdown) {
auto test =
testing::MakeExceptionSafetyTester()
.WithInitialValue(ThrowingValue<>())
.WithContracts([](ThrowingValue<>*) { return AssertionSuccess(); })
.WithOperation([](ThrowingValue<>*) {});
ASSERT_TRUE(test.Test());
// If the countdown isn't reset because there were no exceptions thrown, then
// this will fail with a termination from an unhandled exception
EXPECT_TRUE(test.Test());
}
struct NonCopyable : public NonNegative {
NonCopyable(const NonCopyable&) = delete;
NonCopyable() : NonNegative{0} {}
@ -799,7 +811,7 @@ TEST(ExceptionCheckTest, NonEqualityComparable) {
return testing::AssertionResult(nec->i == NonEqualityComparable().i);
};
auto strong_nec_tester = tester.WithInitialValue(NonEqualityComparable{})
.WithContracts(nec_is_strong);
EXPECT_TRUE(strong_nec_tester.Test());
EXPECT_FALSE(strong_nec_tester.Test(
@ -833,14 +845,14 @@ struct {
testing::AssertionResult operator()(ExhaustivenessTester<T>*) const {
return testing::AssertionSuccess();
}
} CheckExhaustivenessTesterContracts;
template <typename T>
unsigned char ExhaustivenessTester<T>::successes = 0;
TEST(ExceptionCheckTest, Exhaustiveness) {
auto exhaust_tester = testing::MakeExceptionSafetyTester()
.WithContracts(CheckExhaustivenessTesterContracts)
.WithOperation(invoker);
EXPECT_TRUE(
@ -849,7 +861,7 @@ TEST(ExceptionCheckTest, Exhaustiveness) {
EXPECT_TRUE(
exhaust_tester.WithInitialValue(ExhaustivenessTester<ThrowingValue<>>{})
.WithContracts(testing::strong_guarantee)
.Test());
EXPECT_EQ(ExhaustivenessTester<ThrowingValue<>>::successes, 0xF);
}
@ -931,8 +943,8 @@ TEST(ThrowingValueTraitsTest, RelationalOperators) {
}
TEST(ThrowingAllocatorTraitsTest, Assignablility) {
EXPECT_TRUE(absl::is_move_assignable<ThrowingAllocator<int>>::value);
EXPECT_TRUE(absl::is_copy_assignable<ThrowingAllocator<int>>::value);
EXPECT_TRUE(std::is_nothrow_move_assignable<ThrowingAllocator<int>>::value);
EXPECT_TRUE(std::is_nothrow_copy_assignable<ThrowingAllocator<int>>::value);
}

@ -20,7 +20,7 @@
#include "gtest/gtest.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace inline_variable_testing_internal {
namespace {
@ -60,5 +60,5 @@ TEST(InlineVariableTest, FunPtrType) {
} // namespace
} // namespace inline_variable_testing_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@ -15,7 +15,7 @@
#include "absl/base/internal/inline_variable_testing.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace inline_variable_testing_internal {
const Foo& get_foo_a() { return inline_variable_foo; }
@ -23,5 +23,5 @@ const Foo& get_foo_a() { return inline_variable_foo; }
const int& get_int_a() { return inline_variable_int; }
} // namespace inline_variable_testing_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@ -15,7 +15,7 @@
#include "absl/base/internal/inline_variable_testing.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace inline_variable_testing_internal {
const Foo& get_foo_b() { return inline_variable_foo; }
@ -23,5 +23,5 @@ const Foo& get_foo_b() { return inline_variable_foo; }
const int& get_int_b() { return inline_variable_int; }
} // namespace inline_variable_testing_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@ -28,7 +28,7 @@
#endif
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
template <typename T>
@ -161,7 +161,7 @@ class AtomicHook<ReturnType (*)(Args...)> {
#undef ABSL_HAVE_WORKING_ATOMIC_POINTER
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_ATOMIC_HOOK_H_

@ -0,0 +1,195 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef ABSL_BASE_INTERNAL_BITS_H_
#define ABSL_BASE_INTERNAL_BITS_H_
// This file contains bitwise ops which are implementation details of various
// absl libraries.
#include <cstdint>
// Clang on Windows has __builtin_clzll; otherwise we need to use the
// windows intrinsic functions.
#if defined(_MSC_VER)
#include <intrin.h>
#if defined(_M_X64)
#pragma intrinsic(_BitScanReverse64)
#pragma intrinsic(_BitScanForward64)
#endif
#pragma intrinsic(_BitScanReverse)
#pragma intrinsic(_BitScanForward)
#endif
#include "absl/base/attributes.h"
#if defined(_MSC_VER)
// We can achieve something similar to attribute((always_inline)) with MSVC by
// using the __forceinline keyword; however, this is not perfect. MSVC is
// much less aggressive about inlining, even with the __forceinline keyword.
#define ABSL_BASE_INTERNAL_FORCEINLINE __forceinline
#else
// Use default attribute inline.
#define ABSL_BASE_INTERNAL_FORCEINLINE inline ABSL_ATTRIBUTE_ALWAYS_INLINE
#endif
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
ABSL_BASE_INTERNAL_FORCEINLINE int CountLeadingZeros64Slow(uint64_t n) {
int zeroes = 60;
if (n >> 32) zeroes -= 32, n >>= 32;
if (n >> 16) zeroes -= 16, n >>= 16;
if (n >> 8) zeroes -= 8, n >>= 8;
if (n >> 4) zeroes -= 4, n >>= 4;
return "\4\3\2\2\1\1\1\1\0\0\0\0\0\0\0"[n] + zeroes;
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountLeadingZeros64(uint64_t n) {
#if defined(_MSC_VER) && defined(_M_X64)
// MSVC does not have __builtin_clzll. Use _BitScanReverse64.
unsigned long result = 0; // NOLINT(runtime/int)
if (_BitScanReverse64(&result, n)) {
return 63 - result;
}
return 64;
#elif defined(_MSC_VER)
// MSVC does not have __builtin_clzll. Compose two calls to _BitScanReverse
unsigned long result = 0; // NOLINT(runtime/int)
if ((n >> 32) && _BitScanReverse(&result, n >> 32)) {
return 31 - result;
}
if (_BitScanReverse(&result, n)) {
return 63 - result;
}
return 64;
#elif defined(__GNUC__)
// Use __builtin_clzll, which uses the following instructions:
// x86: bsr
// ARM64: clz
// PPC: cntlzd
static_assert(sizeof(unsigned long long) == sizeof(n), // NOLINT(runtime/int)
"__builtin_clzll does not take 64-bit arg");
// Handle 0 as a special case because __builtin_clzll(0) is undefined.
if (n == 0) {
return 64;
}
return __builtin_clzll(n);
#else
return CountLeadingZeros64Slow(n);
#endif
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountLeadingZeros32Slow(uint64_t n) {
int zeroes = 28;
if (n >> 16) zeroes -= 16, n >>= 16;
if (n >> 8) zeroes -= 8, n >>= 8;
if (n >> 4) zeroes -= 4, n >>= 4;
return "\4\3\2\2\1\1\1\1\0\0\0\0\0\0\0"[n] + zeroes;
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountLeadingZeros32(uint32_t n) {
#if defined(_MSC_VER)
unsigned long result = 0; // NOLINT(runtime/int)
if (_BitScanReverse(&result, n)) {
return 31 - result;
}
return 32;
#elif defined(__GNUC__)
// Use __builtin_clz, which uses the following instructions:
// x86: bsr
// ARM64: clz
// PPC: cntlzd
static_assert(sizeof(int) == sizeof(n),
"__builtin_clz does not take 32-bit arg");
// Handle 0 as a special case because __builtin_clz(0) is undefined.
if (n == 0) {
return 32;
}
return __builtin_clz(n);
#else
return CountLeadingZeros32Slow(n);
#endif
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountTrailingZerosNonZero64Slow(uint64_t n) {
int c = 63;
n &= ~n + 1;
if (n & 0x00000000FFFFFFFF) c -= 32;
if (n & 0x0000FFFF0000FFFF) c -= 16;
if (n & 0x00FF00FF00FF00FF) c -= 8;
if (n & 0x0F0F0F0F0F0F0F0F) c -= 4;
if (n & 0x3333333333333333) c -= 2;
if (n & 0x5555555555555555) c -= 1;
return c;
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountTrailingZerosNonZero64(uint64_t n) {
#if defined(_MSC_VER) && defined(_M_X64)
unsigned long result = 0; // NOLINT(runtime/int)
_BitScanForward64(&result, n);
return result;
#elif defined(_MSC_VER)
unsigned long result = 0; // NOLINT(runtime/int)
if (static_cast<uint32_t>(n) == 0) {
_BitScanForward(&result, n >> 32);
return result + 32;
}
_BitScanForward(&result, n);
return result;
#elif defined(__GNUC__)
static_assert(sizeof(unsigned long long) == sizeof(n), // NOLINT(runtime/int)
"__builtin_ctzll does not take 64-bit arg");
return __builtin_ctzll(n);
#else
return CountTrailingZerosNonZero64Slow(n);
#endif
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountTrailingZerosNonZero32Slow(uint32_t n) {
int c = 31;
n &= ~n + 1;
if (n & 0x0000FFFF) c -= 16;
if (n & 0x00FF00FF) c -= 8;
if (n & 0x0F0F0F0F) c -= 4;
if (n & 0x33333333) c -= 2;
if (n & 0x55555555) c -= 1;
return c;
}
ABSL_BASE_INTERNAL_FORCEINLINE int CountTrailingZerosNonZero32(uint32_t n) {
#if defined(_MSC_VER)
unsigned long result = 0; // NOLINT(runtime/int)
_BitScanForward(&result, n);
return result;
#elif defined(__GNUC__)
static_assert(sizeof(int) == sizeof(n),
"__builtin_ctz does not take 32-bit arg");
return __builtin_ctz(n);
#else
return CountTrailingZerosNonZero32Slow(n);
#endif
}
#undef ABSL_BASE_INTERNAL_FORCEINLINE
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_BITS_H_
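For illustration, a typical consumer derives floor(log2(n)) from the leading-zero count; Log2Floor below is a hypothetical helper, not part of the header:

#include <cstdint>
#include "absl/base/internal/bits.h"

// For n > 0, the index of the highest set bit is 63 minus the number of
// leading zeros, which equals floor(log2(n)).
int Log2Floor(uint64_t n) {
  return 63 - absl::base_internal::CountLeadingZeros64(n);
}
// Example: Log2Floor(1) == 0, Log2Floor(8) == 3, Log2Floor(9) == 3.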

@ -0,0 +1,97 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/base/internal/bits.h"
#include "gtest/gtest.h"
namespace {
int CLZ64(uint64_t n) {
int fast = absl::base_internal::CountLeadingZeros64(n);
int slow = absl::base_internal::CountLeadingZeros64Slow(n);
EXPECT_EQ(fast, slow) << n;
return fast;
}
TEST(BitsTest, CountLeadingZeros64) {
EXPECT_EQ(64, CLZ64(uint64_t{}));
EXPECT_EQ(0, CLZ64(~uint64_t{}));
for (int index = 0; index < 64; index++) {
uint64_t x = static_cast<uint64_t>(1) << index;
const auto cnt = 63 - index;
ASSERT_EQ(cnt, CLZ64(x)) << index;
ASSERT_EQ(cnt, CLZ64(x + x - 1)) << index;
}
}
int CLZ32(uint32_t n) {
int fast = absl::base_internal::CountLeadingZeros32(n);
int slow = absl::base_internal::CountLeadingZeros32Slow(n);
EXPECT_EQ(fast, slow) << n;
return fast;
}
TEST(BitsTest, CountLeadingZeros32) {
EXPECT_EQ(32, CLZ32(uint32_t{}));
EXPECT_EQ(0, CLZ32(~uint32_t{}));
for (int index = 0; index < 32; index++) {
uint32_t x = static_cast<uint32_t>(1) << index;
const auto cnt = 31 - index;
ASSERT_EQ(cnt, CLZ32(x)) << index;
ASSERT_EQ(cnt, CLZ32(x + x - 1)) << index;
ASSERT_EQ(CLZ64(x), CLZ32(x) + 32);
}
}
int CTZ64(uint64_t n) {
int fast = absl::base_internal::CountTrailingZerosNonZero64(n);
int slow = absl::base_internal::CountTrailingZerosNonZero64Slow(n);
EXPECT_EQ(fast, slow) << n;
return fast;
}
TEST(BitsTest, CountTrailingZerosNonZero64) {
EXPECT_EQ(0, CTZ64(~uint64_t{}));
for (int index = 0; index < 64; index++) {
uint64_t x = static_cast<uint64_t>(1) << index;
const auto cnt = index;
ASSERT_EQ(cnt, CTZ64(x)) << index;
ASSERT_EQ(cnt, CTZ64(~(x - 1))) << index;
}
}
int CTZ32(uint32_t n) {
int fast = absl::base_internal::CountTrailingZerosNonZero32(n);
int slow = absl::base_internal::CountTrailingZerosNonZero32Slow(n);
EXPECT_EQ(fast, slow) << n;
return fast;
}
TEST(BitsTest, CountTrailingZerosNonZero32) {
EXPECT_EQ(0, CTZ32(~uint32_t{}));
for (int index = 0; index < 32; index++) {
uint32_t x = static_cast<uint32_t>(1) << index;
const auto cnt = index;
ASSERT_EQ(cnt, CTZ32(x)) << index;
ASSERT_EQ(cnt, CTZ32(~(x - 1))) << index;
}
}
} // namespace

@ -27,7 +27,7 @@
#include "absl/base/internal/unscaledcycleclock.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
#if ABSL_USE_UNSCALED_CYCLECLOCK
@ -79,5 +79,5 @@ double CycleClock::Frequency() {
#endif
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@ -46,7 +46,7 @@
#include <cstdint>
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
// -----------------------------------------------------------------------------
@ -73,7 +73,7 @@ class CycleClock {
};
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_CYCLECLOCK_H_

@ -62,7 +62,7 @@ extern "C" void* __mmap2(void*, size_t, int, int, int, size_t);
#endif // __BIONIC__
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
// Platform specific logic extracted from
@ -76,7 +76,11 @@ inline void* DirectMmap(void* start, size_t length, int prot, int flags, int fd,
// On these architectures, implement mmap with mmap2.
static int pagesize = 0;
if (pagesize == 0) {
#if defined(__wasm__) || defined(__asmjs__)
pagesize = getpagesize();
#else
pagesize = sysconf(_SC_PAGESIZE);
#endif
}
if (offset < 0 || offset % pagesize != 0) {
errno = EINVAL;
@ -93,11 +97,13 @@ inline void* DirectMmap(void* start, size_t length, int prot, int flags, int fd,
#endif
#elif defined(__s390x__)
// On s390x, mmap() arguments are passed in memory.
unsigned long buf[6] = {reinterpret_cast<unsigned long>(start), // NOLINT
static_cast<unsigned long>(length), // NOLINT
static_cast<unsigned long>(prot), // NOLINT
static_cast<unsigned long>(flags), // NOLINT
static_cast<unsigned long>(fd), // NOLINT
static_cast<unsigned long>(offset)}; // NOLINT
return reinterpret_cast<void*>(syscall(SYS_mmap, buf));
#elif defined(__x86_64__)
// The x32 ABI has 32 bit longs, but the syscall interface is 64 bit.
// We need to explicitly cast to an unsigned 64 bit type to avoid implicit
@ -123,7 +129,7 @@ inline int DirectMunmap(void* start, size_t length) {
}
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#else // !__linux__
@ -132,7 +138,7 @@ inline int DirectMunmap(void* start, size_t length) {
// actual mmap()/munmap() methods.
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
inline void* DirectMmap(void* start, size_t length, int prot, int flags, int fd,
@ -145,7 +151,7 @@ inline int DirectMunmap(void* start, size_t length) {
}
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // __linux__

@ -34,7 +34,7 @@
#include "absl/base/port.h"
namespace absl {
inline namespace lts_2018_12_18 {
// Use compiler byte-swapping intrinsics if they are available. 32-bit
// and 64-bit versions are available in Clang and GCC as of GCC 4.3.0.
@ -83,14 +83,14 @@ inline uint64_t gbswap_64(uint64_t host_int) {
#elif defined(__GLIBC__)
return bswap_64(host_int);
#else
return (((host_int & uint64_t{0xFF}) << 56) |
((host_int & uint64_t{0xFF00}) << 40) |
((host_int & uint64_t{0xFF0000}) << 24) |
((host_int & uint64_t{0xFF000000}) << 8) |
((host_int & uint64_t{0xFF00000000}) >> 8) |
((host_int & uint64_t{0xFF0000000000}) >> 24) |
((host_int & uint64_t{0xFF000000000000}) >> 40) |
((host_int & uint64_t{0xFF00000000000000}) >> 56));
#endif // bswap_64
}
@ -98,8 +98,10 @@ inline uint32_t gbswap_32(uint32_t host_int) {
#if defined(__GLIBC__)
return bswap_32(host_int);
#else
return (((host_int & uint32_t{0xFF}) << 24) |
((host_int & uint32_t{0xFF00}) << 8) |
((host_int & uint32_t{0xFF0000}) >> 8) |
((host_int & uint32_t{0xFF000000}) >> 24));
#endif
}
@ -107,7 +109,8 @@ inline uint16_t gbswap_16(uint16_t host_int) {
#if defined(__GLIBC__)
return bswap_16(host_int);
#else
return (((host_int & uint16_t{0xFF}) << 8) |
((host_int & uint16_t{0xFF00}) >> 8));
#endif
}
@ -265,7 +268,7 @@ inline void Store64(void *p, uint64_t v) {
} // namespace big_endian
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_ENDIAN_H_
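A short sketch of the load/store helpers this header provides, assuming the documented semantics hold:

#include <cstdint>
#include "absl/base/internal/endian.h"

void RoundTrip(char* buf) {
  // Writes the value in little-endian byte order regardless of host
  // endianness, then reads it back; v == 0x0123456789abcdef on any host.
  absl::little_endian::Store64(buf, uint64_t{0x0123456789abcdef});
  uint64_t v = absl::little_endian::Load64(buf);
  (void)v;
}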

@ -24,7 +24,7 @@
#include "absl/base/config.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace {
const uint64_t kInitialNumber{0x0123456789abcdef};
@ -34,32 +34,16 @@ const uint16_t k16Value{0x0123};
const int kNumValuesToTest = 1000000;
const int kRandomSeed = 12345;
#if defined(ABSL_IS_BIG_ENDIAN)
const uint64_t kInitialInNetworkOrder{kInitialNumber};
const uint64_t k64ValueLE{0xefcdab8967452301};
const uint32_t k32ValueLE{0x67452301};
const uint16_t k16ValueLE{0x2301};
const uint8_t k8ValueLE{k8Value};
const uint64_t k64IValueLE{0xefcdab89674523a1};
const uint32_t k32IValueLE{0x67452391};
const uint16_t k16IValueLE{0x85ff};
const uint8_t k8IValueLE{0xff};
const uint64_t kDoubleValueLE{0x6e861bf0f9210940};
const uint32_t kFloatValueLE{0xd00f4940};
const uint8_t kBoolValueLE{0x1};
const uint64_t k64ValueBE{kInitialNumber};
const uint32_t k32ValueBE{k32Value};
const uint16_t k16ValueBE{k16Value};
const uint8_t k8ValueBE{k8Value};
const uint64_t k64IValueBE{0xa123456789abcdef};
const uint32_t k32IValueBE{0x91234567};
const uint16_t k16IValueBE{0xff85};
const uint8_t k8IValueBE{0xff};
const uint64_t kDoubleValueBE{0x400921f9f01b866e};
const uint32_t kFloatValueBE{0x40490fd0};
const uint8_t kBoolValueBE{0x1};
#elif defined(ABSL_IS_LITTLE_ENDIAN)
const uint64_t kInitialInNetworkOrder{0xefcdab8967452301};
const uint64_t k64ValueLE{kInitialNumber};
const uint32_t k32ValueLE{k32Value};
@ -277,5 +261,5 @@ TEST(EndianessTest, big_endian) {
}
} // namespace
} // inline namespace lts_2018_12_18
} // namespace absl

@ -23,6 +23,10 @@ exceptions_internal::NoThrowTag nothrow_ctor;
exceptions_internal::StrongGuaranteeTagType strong_guarantee;
exceptions_internal::ExceptionSafetyTestBuilder<> MakeExceptionSafetyTester() {
return {};
}
namespace exceptions_internal {
int countdown = -1;

@ -33,7 +33,7 @@
#include "absl/meta/type_traits.h"
#include "absl/strings/string_view.h"
#include "absl/strings/substitute.h"
#include "absl/types/optional.h"
#include "absl/utility/utility.h"
namespace testing {
@ -127,10 +127,8 @@ class ConstructorTracker {
void* address = it.first;
TrackedAddress& tracked_address = it.second;
if (tracked_address.is_alive) {
ADD_FAILURE() << ErrorMessage(address, tracked_address.description,
countdown_, "Object was not destroyed.");
}
}
}
@ -141,11 +139,11 @@ class ConstructorTracker {
TrackedAddress& tracked_address =
current_tracker_instance_->address_map_[address];
if (tracked_address.is_alive) {
ADD_FAILURE() << ErrorMessage(
address, tracked_address.description,
current_tracker_instance_->countdown_,
"Object was re-constructed. Current object was constructed by " +
description);
}
tracked_address = {true, std::move(description)};
}
@ -159,10 +157,9 @@ class ConstructorTracker {
TrackedAddress& tracked_address = it->second;
if (!tracked_address.is_alive) {
ADD_FAILURE() << ErrorMessage(address, tracked_address.description,
current_tracker_instance_->countdown_,
"Object was re-destroyed.");
}
tracked_address.is_alive = false;
}
@ -172,6 +169,16 @@ class ConstructorTracker {
return current_tracker_instance_ != nullptr;
}
static std::string ErrorMessage(void* address, const std::string& address_description,
int countdown, const std::string& error_description) {
return absl::Substitute(
"With coundtown at $0:\n"
" $1\n"
" Object originally constructed by $2\n"
" Object address: $3\n",
countdown, error_description, address_description, address);
}
std::unordered_map<void*, TrackedAddress> address_map_;
int countdown_;
@ -190,70 +197,6 @@ class TrackedObject {
~TrackedObject() noexcept { ConstructorTracker::ObjectDestructed(this); }
};
template <typename Factory, typename Operation, typename Invariant>
absl::optional<testing::AssertionResult> TestSingleInvariantAtCountdownImpl(
const Factory& factory, const Operation& operation, int count,
const Invariant& invariant) {
auto t_ptr = factory();
absl::optional<testing::AssertionResult> current_res;
SetCountdown(count);
try {
operation(t_ptr.get());
} catch (const exceptions_internal::TestException& e) {
current_res.emplace(invariant(t_ptr.get()));
if (!current_res.value()) {
*current_res << e.what() << " failed invariant check";
}
}
UnsetCountdown();
return current_res;
}
template <typename Factory, typename Operation>
absl::optional<testing::AssertionResult> TestSingleInvariantAtCountdownImpl(
const Factory& factory, const Operation& operation, int count,
StrongGuaranteeTagType) {
using TPtr = typename decltype(factory())::pointer;
auto t_is_strong = [&](TPtr t) { return *t == *factory(); };
return TestSingleInvariantAtCountdownImpl(factory, operation, count,
t_is_strong);
}
template <typename Factory, typename Operation, typename Invariant>
int TestSingleInvariantAtCountdown(
const Factory& factory, const Operation& operation, int count,
const Invariant& invariant,
absl::optional<testing::AssertionResult>* reduced_res) {
// If reduced_res is empty, it means the current call to
// TestSingleInvariantAtCountdown(...) is the first test being run so we do
// want to run it. Alternatively, if it's not empty (meaning a previous test
// has run) we want to check if it passed. If the previous test did pass, we
// want to continue running tests so we do want to run the current one. If it
// failed, we want to short circuit so as not to overwrite the AssertionResult
// output. If that's the case, we do not run the current test and instead we
// simply return.
if (!reduced_res->has_value() || reduced_res->value()) {
*reduced_res = TestSingleInvariantAtCountdownImpl(factory, operation, count,
invariant);
}
return 0;
}
template <typename Factory, typename Operation, typename... Invariants>
inline absl::optional<testing::AssertionResult> TestAllInvariantsAtCountdown(
const Factory& factory, const Operation& operation, int count,
const Invariants&... invariants) {
absl::optional<testing::AssertionResult> reduced_res;
// Run each checker, short circuiting after the first failure
int dummy[] = {
0, (TestSingleInvariantAtCountdown(factory, operation, count, invariants,
&reduced_res))...};
static_cast<void>(dummy);
return reduced_res;
}
} // namespace exceptions_internal
extern exceptions_internal::NoThrowTag nothrow_ctor;
@ -773,7 +716,7 @@ class ThrowingAllocator : private exceptions_internal::TrackedObject {
}
size_type max_size() const noexcept {
return (std::numeric_limits<difference_type>::max)() / sizeof(value_type);
}
ThrowingAllocator select_on_container_copy_construction() noexcept(
@ -858,7 +801,7 @@ testing::AssertionResult TestNothrowOp(const Operation& operation) {
try {
operation();
return testing::AssertionSuccess();
} catch (const exceptions_internal::TestException&) {
return testing::AssertionFailure()
<< "TestException thrown during call to operation() when nothrow "
"guarantee was expected.";
@ -871,7 +814,7 @@ testing::AssertionResult TestNothrowOp(const Operation& operation) {
namespace exceptions_internal {
// Dummy struct for ExceptionSafetyTestBuilder<> partial state.
struct UninitializedT {};
template <typename T>
@ -884,29 +827,106 @@ class DefaultFactory {
T t_;
};
template <size_t LazyContractsCount, typename LazyFactory,
typename LazyOperation>
using EnableIfTestable = typename absl::enable_if_t<
LazyContractsCount != 0 &&
!std::is_same<LazyFactory, UninitializedT>::value &&
!std::is_same<LazyOperation, UninitializedT>::value>;
template <typename Factory = UninitializedT,
typename Operation = UninitializedT, typename... Contracts>
class ExceptionSafetyTestBuilder;
} // namespace exceptions_internal
/*
* Constructs an empty ExceptionSafetyTestBuilder. All
* ExceptionSafetyTestBuilder objects are immutable and all With[thing] mutation
* methods return new instances of ExceptionSafetyTestBuilder.
*
* In order to test a T for exception safety, a factory for that T, a testable
* operation, and at least one contract callback returning an assertion
* result must be applied using the respective methods.
*/
exceptions_internal::ExceptionSafetyTestBuilder<> MakeExceptionSafetyTester();
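A minimal usage sketch of the builder described above; MyType and its members are illustrative stand-ins for the type under test:

// Illustrative only; assumes MyType, Mutate(), and IsValid() exist.
auto result = testing::MakeExceptionSafetyTester()
                  .WithInitialValue(MyType{})  // factory that copies this value
                  .WithOperation([](MyType* t) { t->Mutate(); })
                  .WithContracts([](MyType* t) {
                    // Post-throw check: the object must still be valid.
                    return t->IsValid() ? testing::AssertionSuccess()
                                        : testing::AssertionFailure();
                  })
                  .Test();  // AssertionSuccess() if every contract held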
namespace exceptions_internal {
template <typename T>
struct IsUniquePtr : std::false_type {};
template <typename T, typename D>
struct IsUniquePtr<std::unique_ptr<T, D>> : std::true_type {};
template <typename Factory>
struct FactoryPtrTypeHelper {
using type = decltype(std::declval<const Factory&>()());
static_assert(IsUniquePtr<type>::value, "Factories must return a unique_ptr");
};
template <typename Factory>
using FactoryPtrType = typename FactoryPtrTypeHelper<Factory>::type;
template <typename Factory>
using FactoryElementType = typename FactoryPtrType<Factory>::element_type;
template <typename T>
class ExceptionSafetyTest {
using Factory = std::function<std::unique_ptr<T>()>;
using Operation = std::function<void(T*)>;
using Contract = std::function<AssertionResult(T*)>;
public:
template <typename... Contracts>
explicit ExceptionSafetyTest(const Factory& f, const Operation& op,
const Contracts&... contracts)
: factory_(f), operation_(op), contracts_{WrapContract(contracts)...} {}
AssertionResult Test() const {
for (int count = 0;; ++count) {
exceptions_internal::ConstructorTracker ct(count);
for (const auto& contract : contracts_) {
auto t_ptr = factory_();
try {
SetCountdown(count);
operation_(t_ptr.get());
// Unset for the case that the operation throws no exceptions, which
// would leave the countdown set and break the *next* exception safety
// test after this one.
UnsetCountdown();
return AssertionSuccess();
} catch (const exceptions_internal::TestException& e) {
if (!contract(t_ptr.get())) {
return AssertionFailure() << e.what() << " failed contract check";
}
}
}
}
}
private:
template <typename ContractFn>
Contract WrapContract(const ContractFn& contract) {
return [contract](T* t_ptr) { return AssertionResult(contract(t_ptr)); };
}
Contract WrapContract(StrongGuaranteeTagType) {
return [this](T* t_ptr) { return AssertionResult(*factory_() == *t_ptr); };
}
Factory factory_;
Operation operation_;
std::vector<Contract> contracts_;
};
/*
* Builds a tester object that tests if performing an operation on a T follows
* exception safety guarantees. Verification is done via contract assertion
* callbacks applied to T instances post-throw.
*
* Template parameters for ExceptionSafetyTestBuilder:
*
* - Factory: The factory object (passed in via tester.WithFactory(...) or
* tester.WithInitialValue(...)) must be invocable with the signature
@ -921,25 +941,25 @@ namespace exceptions_internal {
* fresh T instance so it's free to modify and destroy the T instances as it
* pleases.
*
* - Contracts...: The contract assertion callback objects (passed in via
* tester.WithContracts(...)) must be invocable with the signature
* `testing::AssertionResult operator()(T*) const` where T is the type being
* tested. Contract assertion callbacks are provided T instances post-throw.
* They must return testing::AssertionSuccess when the type contracts of the
* provided T instance hold. If the type contracts of the T instance do not
* hold, they must return testing::AssertionFailure. Execution order of
* Contracts... is unspecified. They will each individually get a fresh T
* instance so they are free to modify and destroy the T instances as they
* please.
*/
template <typename Factory, typename Operation, typename... Contracts>
class ExceptionSafetyTestBuilder {
public:
/*
* Returns a new ExceptionSafetyTestBuilder with an included T factory based
* on the provided T instance. The existing factory will not be included in
* the newly created tester instance. The created factory returns a new T
* instance by copy-constructing the provided const T& t.
*
* Preconditions for tester.WithInitialValue(const T& t):
*
@ -948,63 +968,63 @@ class ExceptionSafetyTester {
* tester.WithFactory(...).
*/
template <typename T>
ExceptionSafetyTestBuilder<DefaultFactory<T>, Operation, Contracts...>
WithInitialValue(const T& t) const {
return WithFactory(DefaultFactory<T>(t));
}
/*
* Returns a new ExceptionSafetyTestBuilder with the provided T factory
* included. The existing factory will not be included in the newly-created
* tester instance. This method is intended for use with types lacking a copy
* constructor. Types that can be copy-constructed should instead use the
* method tester.WithInitialValue(...).
*/
template <typename NewFactory>
ExceptionSafetyTestBuilder<absl::decay_t<NewFactory>, Operation, Contracts...>
WithFactory(const NewFactory& new_factory) const {
return {new_factory, operation_, contracts_};
}
/*
* Returns a new ExceptionSafetyTestBuilder with the provided testable
* operation included. The existing operation will not be included in the
* newly created tester.
*/
template <typename NewOperation>
ExceptionSafetyTestBuilder<Factory, absl::decay_t<NewOperation>, Contracts...>
WithOperation(const NewOperation& new_operation) const {
return {factory_, new_operation, contracts_};
}
/*
* Returns a new ExceptionSafetyTestBuilder with the provided MoreContracts...
* combined with the Contracts... that were already included in the instance
* on which the method was called. Contracts... cannot be removed or replaced
* once added to an ExceptionSafetyTestBuilder instance. A fresh object must
* be created in order to get an empty Contracts... list.
*
* In addition to passing in custom contract assertion callbacks, this method
* accepts `testing::strong_guarantee` as an argument which checks T instances
* post-throw against freshly created T instances via operator== to verify
* that any state changes made during the execution of the operation were
* properly rolled back.
*/
template <typename... MoreContracts>
ExceptionSafetyTestBuilder<Factory, Operation, Contracts...,
absl::decay_t<MoreContracts>...>
WithContracts(const MoreContracts&... more_contracts) const {
return {
factory_, operation_,
std::tuple_cat(contracts_, std::tuple<absl::decay_t<MoreContracts>...>(
more_contracts...))};
}
/*
* Returns a testing::AssertionResult that is the reduced result of the
* exception safety algorithm. The algorithm short circuits and returns
* AssertionFailure after the first contract callback returns an
* AssertionFailure. Otherwise, if all contract callbacks return an
* AssertionSuccess, the reduced result is AssertionSuccess.
*
* The passed-in testable operation will not be saved in a new tester instance
@ -1013,97 +1033,62 @@ class ExceptionSafetyTester {
*
* Preconditions for tester.Test(const NewOperation& new_operation):
*
* - May only be called after at least one contract assertion callback and a
* factory or initial value have been provided.
*/
template <
typename NewOperation,
typename = EnableIfTestable<sizeof...(Contracts), Factory, NewOperation>>
testing::AssertionResult Test(const NewOperation& new_operation) const {
return TestImpl(new_operation, absl::index_sequence_for<Invariants...>());
return TestImpl(new_operation, absl::index_sequence_for<Contracts...>());
}
/*
* Returns a testing::AssertionResult that is the reduced result of the
* exception safety algorithm. The algorithm short circuits and returns
* AssertionFailure after the first invariant callback returns an
* AssertionFailure. Otherwise, if all invariant callbacks return an
* AssertionFailure after the first contract callback returns an
* AssertionFailure. Otherwise, if all contract callbacks return an
* AssertionSuccess, the reduced result is AssertionSuccess.
*
* Preconditions for tester.Test():
*
* - May only be called after at least one invariant assertion callback, a
* - May only be called after at least one contract assertion callback, a
* factory or initial value and a testable operation have been provided.
*/
template <typename LazyOperation = Operation,
typename =
EnableIfTestable<sizeof...(Invariants), Factory, LazyOperation>>
template <
typename LazyOperation = Operation,
typename = EnableIfTestable<sizeof...(Contracts), Factory, LazyOperation>>
testing::AssertionResult Test() const {
return TestImpl(operation_, absl::index_sequence_for<Invariants...>());
return Test(operation_);
}
private:
template <typename, typename, typename...>
friend class ExceptionSafetyTester;
friend class ExceptionSafetyTestBuilder;
friend ExceptionSafetyTester<> testing::MakeExceptionSafetyTester();
friend ExceptionSafetyTestBuilder<> testing::MakeExceptionSafetyTester();
ExceptionSafetyTester() {}
ExceptionSafetyTestBuilder() {}
ExceptionSafetyTester(const Factory& f, const Operation& o,
const std::tuple<Invariants...>& i)
: factory_(f), operation_(o), invariants_(i) {}
ExceptionSafetyTestBuilder(const Factory& f, const Operation& o,
const std::tuple<Contracts...>& i)
: factory_(f), operation_(o), contracts_(i) {}
template <typename SelectedOperation, size_t... Indices>
testing::AssertionResult TestImpl(const SelectedOperation& selected_operation,
testing::AssertionResult TestImpl(SelectedOperation selected_operation,
absl::index_sequence<Indices...>) const {
// Starting from 0 and counting upwards until one of the exit conditions is
// hit...
for (int count = 0;; ++count) {
exceptions_internal::ConstructorTracker ct(count);
// Run the full exception safety test algorithm for the current countdown
auto reduced_res =
TestAllInvariantsAtCountdown(factory_, selected_operation, count,
std::get<Indices>(invariants_)...);
// If there is no value in the optional, no invariants were run because no
// exception was thrown. This means that the test is complete and the loop
// can exit successfully.
if (!reduced_res.has_value()) {
return testing::AssertionSuccess();
}
// If the optional is not empty and the value is falsy, an invariant check
// failed so the test must exit to propagate the failure.
if (!reduced_res.value()) {
return reduced_res.value();
}
// If the optional is not empty and the value is not falsy, it means
// exceptions were thrown but the invariants passed so the test must
// continue to run.
}
return ExceptionSafetyTest<FactoryElementType<Factory>>(
factory_, selected_operation, std::get<Indices>(contracts_)...)
.Test();
}
Factory factory_;
Operation operation_;
std::tuple<Invariants...> invariants_;
std::tuple<Contracts...> contracts_;
};
} // namespace exceptions_internal
/*
* Constructs an empty ExceptionSafetyTestBuilder. All
* ExceptionSafetyTestBuilder objects are immutable and all With[thing]
* mutation methods return new instances of ExceptionSafetyTestBuilder.
*
* In order to test a T for exception safety, a factory for that T, a testable
* operation, and at least one contract callback returning an assertion
* result must be applied using the respective methods.
*/
inline exceptions_internal::ExceptionSafetyTestBuilder<>
MakeExceptionSafetyTester() {
return {};
}
} // namespace testing
#endif // ABSL_BASE_INTERNAL_EXCEPTION_SAFETY_TESTING_H_
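Taken together, the builder methods above compose as follows. This is a usage
sketch only: the vector type and the lambdas are illustrative stand-ins, and
only builder methods shown in this header are used.

// Usage sketch (illustrative types; not part of this header).
auto factory = []() { return absl::make_unique<std::vector<int>>(); };
auto operation = [](std::vector<int>* v) { v->push_back(42); };
testing::AssertionResult result = testing::MakeExceptionSafetyTester()
                                      .WithFactory(factory)
                                      .WithOperation(operation)
                                      .WithContracts(testing::strong_guarantee)
                                      .Test();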

@ -35,7 +35,7 @@
EXPECT_DEATH(expr, ".*")
#else
#define ABSL_BASE_INTERNAL_EXPECT_FAIL(expr, exception_t, text) \
EXPECT_DEATH(expr, text)
EXPECT_DEATH_IF_SUPPORTED(expr, text)
#endif

@ -18,7 +18,7 @@
#include <cstdint>
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// Arbitrary value with high bits set. Xor'ing with it is unlikely
@ -43,7 +43,7 @@ inline T* UnhidePtr(uintptr_t hidden) {
}
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_HIDE_PTR_H_
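A sketch of how this pair of helpers is meant to be used; `MyType` and `obj`
are placeholders, and HidePtr is assumed to be the companion of UnhidePtr in
this header.

// obj is a MyType*; hide it from conservative leak checkers.
uintptr_t hidden = absl::base_internal::HidePtr(obj);
// ... later, recover the pointer:
MyType* recovered = absl::base_internal::UnhidePtr<MyType>(hidden);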

@ -17,7 +17,7 @@
#define ABSL_BASE_INTERNAL_IDENTITY_H_
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace internal {
template <typename T>
@ -29,7 +29,7 @@ template <typename T>
using identity_t = typename identity<T>::type;
} // namespace internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_IDENTITY_H_

@ -18,7 +18,7 @@
#include "absl/base/internal/inline_variable.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace inline_variable_testing_internal {
struct Foo {
@ -40,7 +40,7 @@ const int& get_int_a();
const int& get_int_b();
} // namespace inline_variable_testing_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INLINE_VARIABLE_TESTING_H_

@ -43,7 +43,7 @@
// top of this file for the API documentation.
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// The five classes below each implement one of the clauses from the definition
@ -184,7 +184,7 @@ InvokeT<F, Args...> Invoke(F&& f, Args&&... args) {
std::forward<Args>(args)...);
}
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_INVOKE_H_
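A sketch of the INVOKE clauses the classes above implement; `Widget` is an
illustrative type, not part of this header.

struct Widget {
  int n;
  int Size() const { return n; }
};
Widget w{3};
int a = absl::base_internal::Invoke(&Widget::Size, w);  // member function
int b = absl::base_internal::Invoke(&Widget::n, w);     // data member
int c = absl::base_internal::Invoke([](int x) { return x + 1; }, 41);  // functor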

@ -63,7 +63,7 @@
#endif // __APPLE__
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// A first-fit allocator with amortized logarithmic free() time.
@ -209,7 +209,7 @@ struct LowLevelAlloc::Arena {
int32_t allocation_count GUARDED_BY(mu);
// flags passed to NewArena
const uint32_t flags;
// Result of getpagesize()
// Result of sysconf(_SC_PAGESIZE)
const size_t pagesize;
// Lowest power of two >= max(16, sizeof(AllocList))
const size_t roundup;
@ -325,8 +325,10 @@ size_t GetPageSize() {
SYSTEM_INFO system_info;
GetSystemInfo(&system_info);
return std::max(system_info.dwPageSize, system_info.dwAllocationGranularity);
#else
#elif defined(__wasm__) || defined(__asmjs__)
return getpagesize();
#else
return sysconf(_SC_PAGESIZE);
#endif
}
@ -402,16 +404,20 @@ bool LowLevelAlloc::DeleteArena(Arena *arena) {
ABSL_RAW_CHECK(munmap_result != 0,
"LowLevelAlloc::DeleteArena: VitualFree failed");
#else
#ifndef ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING
if ((arena->flags & LowLevelAlloc::kAsyncSignalSafe) == 0) {
munmap_result = munmap(region, size);
} else {
munmap_result = base_internal::DirectMunmap(region, size);
}
#else
munmap_result = munmap(region, size);
#endif // ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING
if (munmap_result != 0) {
ABSL_RAW_LOG(FATAL, "LowLevelAlloc::DeleteArena: munmap failed: %d",
errno);
}
#endif
#endif // _WIN32
}
section.Leave();
arena->~Arena();
@ -546,6 +552,7 @@ static void *DoAllocWithArena(size_t request, LowLevelAlloc::Arena *arena) {
MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
ABSL_RAW_CHECK(new_pages != nullptr, "VirtualAlloc failed");
#else
#ifndef ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING
if ((arena->flags & LowLevelAlloc::kAsyncSignalSafe) != 0) {
new_pages = base_internal::DirectMmap(nullptr, new_pages_size,
PROT_WRITE|PROT_READ, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
@ -553,10 +560,15 @@ static void *DoAllocWithArena(size_t request, LowLevelAlloc::Arena *arena) {
new_pages = mmap(nullptr, new_pages_size, PROT_WRITE | PROT_READ,
MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
}
#else
new_pages = mmap(nullptr, new_pages_size, PROT_WRITE | PROT_READ,
MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
#endif // ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING
if (new_pages == MAP_FAILED) {
ABSL_RAW_LOG(FATAL, "mmap error: %d", errno);
}
#endif
#endif // _WIN32
arena->mu.Lock();
s = reinterpret_cast<AllocList *>(new_pages);
s->header.size = new_pages_size;
@ -600,7 +612,7 @@ void *LowLevelAlloc::AllocWithArena(size_t request, Arena *arena) {
}
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_LOW_LEVEL_ALLOC_MISSING

@ -39,10 +39,13 @@
#define ABSL_LOW_LEVEL_ALLOC_MISSING 1
#endif
// Using LowLevelAlloc with kAsyncSignalSafe isn't supported on Windows.
// Using LowLevelAlloc with kAsyncSignalSafe isn't supported on Windows or
// asm.js / WebAssembly.
// See https://kripken.github.io/emscripten-site/docs/porting/pthreads.html
// for more information.
#ifdef ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING
#error ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING cannot be directly set
#elif defined(_WIN32)
#elif defined(_WIN32) || defined(__asmjs__) || defined(__wasm__)
#define ABSL_LOW_LEVEL_ALLOC_ASYNC_SIGNAL_SAFE_MISSING 1
#endif
@ -51,7 +54,7 @@
#include "absl/base/port.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
class LowLevelAlloc {
@ -116,6 +119,6 @@ class LowLevelAlloc {
};
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_LOW_LEVEL_ALLOC_H_

@ -22,7 +22,7 @@
#include <utility>
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
namespace {
@ -149,7 +149,7 @@ static struct BeforeMain {
} // namespace
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
int main(int argc, char *argv[]) {

@ -28,7 +28,7 @@ extern "C" bool __google_disable_rescheduling(void);
extern "C" void __google_enable_rescheduling(bool disable_result);
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
class SchedulingHelper; // To allow use of SchedulingGuard.
@ -101,6 +101,6 @@ inline void SchedulingGuard::EnableRescheduling(bool /* disable_result */) {
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_LOW_LEVEL_SCHEDULING_H_

@ -139,7 +139,7 @@ void RawLogVA(absl::LogSeverity severity, const char* file, int line,
#endif
#ifdef ABSL_MIN_LOG_LEVEL
if (static_cast<int>(severity) < ABSL_MIN_LOG_LEVEL &&
if (severity < static_cast<absl::LogSeverity>(ABSL_MIN_LOG_LEVEL) &&
severity < absl::LogSeverity::kFatal) {
enabled = false;
}
@ -181,7 +181,7 @@ void RawLogVA(absl::LogSeverity severity, const char* file, int line,
} // namespace
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace raw_logging_internal {
void SafeWriteToStderr(const char *s, size_t len) {
#if defined(ABSL_HAVE_SYSCALL_WRITE)
@ -207,6 +207,15 @@ void RawLog(absl::LogSeverity severity, const char* file, int line,
va_end(ap);
}
// Non-formatting version of RawLog().
//
// TODO(gfalcon): When string_view no longer depends on base, change this
// interface to take its message as a string_view instead.
static void DefaultInternalLog(absl::LogSeverity severity, const char* file,
int line, const std::string& message) {
RawLog(severity, file, line, "%s", message.c_str());
}
bool RawLoggingFullySupported() {
#ifdef ABSL_LOW_LEVEL_WRITE_SUPPORTED
return true;
@ -215,6 +224,13 @@ bool RawLoggingFullySupported() {
#endif // !ABSL_LOW_LEVEL_WRITE_SUPPORTED
}
ABSL_CONST_INIT absl::base_internal::AtomicHook<InternalLogFunction>
internal_log_function(DefaultInternalLog);
void RegisterInternalLogFunction(InternalLogFunction func) {
internal_log_function.Store(func);
}
} // namespace raw_logging_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
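A sketch of wiring a richer logger into the hook registered above; `MyLogSink`
is hypothetical, and the hook type is the InternalLogFunction declared in
raw_logging.h.

// MyLogSink is a hypothetical richer logger.
void MyInternalLog(absl::LogSeverity severity, const char* file, int line,
                   const std::string& message) {
  MyLogSink(severity, file, line, message);  // hypothetical sink
}
// At process start-up:
//   absl::raw_logging_internal::RegisterInternalLogFunction(&MyInternalLog);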

@ -19,7 +19,10 @@
#ifndef ABSL_BASE_INTERNAL_RAW_LOGGING_H_
#define ABSL_BASE_INTERNAL_RAW_LOGGING_H_
#include <string>
#include "absl/base/attributes.h"
#include "absl/base/internal/atomic_hook.h"
#include "absl/base/log_severity.h"
#include "absl/base/macros.h"
#include "absl/base/port.h"
@ -57,6 +60,34 @@
} \
} while (0)
// ABSL_INTERNAL_LOG and ABSL_INTERNAL_CHECK work like the RAW variants above,
// except that if the richer log library is linked into the binary, we dispatch
// to that instead. This is potentially useful for internal logging and
// assertions, where we are using RAW_LOG neither for its async-signal-safety
// nor for its non-allocating nature, but rather because raw logging has very
// few other dependencies.
//
// The API is a subset of the above: each macro only takes two arguments. Use
// StrCat if you need to build a richer message.
#define ABSL_INTERNAL_LOG(severity, message) \
do { \
constexpr const char* absl_raw_logging_internal_basename = \
::absl::raw_logging_internal::Basename(__FILE__, \
sizeof(__FILE__) - 1); \
::absl::raw_logging_internal::internal_log_function( \
ABSL_RAW_LOGGING_INTERNAL_##severity, \
absl_raw_logging_internal_basename, __LINE__, message); \
} while (0)
#define ABSL_INTERNAL_CHECK(condition, message) \
do { \
if (ABSL_PREDICT_FALSE(!(condition))) { \
std::string death_message = "Check " #condition " failed: "; \
death_message += std::string(message); \
ABSL_INTERNAL_LOG(FATAL, death_message); \
} \
} while (0)
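For reference, the two-argument shape described above looks like this in use;
`n` and `ptr` are assumed locals.

// Usage sketch; n and ptr are assumed locals.
ABSL_INTERNAL_LOG(INFO, absl::StrCat("loaded ", n, " entries"));
ABSL_INTERNAL_CHECK(ptr != nullptr, "ptr must not be null");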
#define ABSL_RAW_LOGGING_INTERNAL_INFO ::absl::LogSeverity::kInfo
#define ABSL_RAW_LOGGING_INTERNAL_WARNING ::absl::LogSeverity::kWarning
#define ABSL_RAW_LOGGING_INTERNAL_ERROR ::absl::LogSeverity::kError
@ -65,7 +96,7 @@
::absl::NormalizeLogSeverity(severity)
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace raw_logging_internal {
// Helper function to implement ABSL_RAW_LOG
@ -84,7 +115,7 @@ void SafeWriteToStderr(const char *s, size_t len);
// compile-time function to get the "base" filename, that is, the part of
// a filename after the last "/" or "\" path separator. The search starts at
// the end of the std::string; the second parameter is the length of the std::string.
// the end of the string; the second parameter is the length of the string.
constexpr const char* Basename(const char* fname, int offset) {
return offset == 0 || fname[offset - 1] == '/' || fname[offset - 1] == '\\'
? fname + offset
@ -132,8 +163,20 @@ using LogPrefixHook = bool (*)(absl::LogSeverity severity, const char* file,
using AbortHook = void (*)(const char* file, int line, const char* buf_start,
const char* prefix_end, const char* buf_end);
// Internal logging function for ABSL_INTERNAL_LOG to dispatch to.
//
// TODO(gfalcon): When string_view no longer depends on base, change this
// interface to take its message as a string_view instead.
using InternalLogFunction = void (*)(absl::LogSeverity severity,
const char* file, int line,
const std::string& message);
extern base_internal::AtomicHook<InternalLogFunction> internal_log_function;
void RegisterInternalLogFunction(InternalLogFunction func);
} // namespace raw_logging_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_RAW_LOGGING_H_
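Since Basename above is constexpr, its behavior can be checked at compile
time; a small illustrative assertion:

// Illustrative compile-time check of Basename.
static_assert(absl::raw_logging_internal::Basename(
                  "a/b.cc", sizeof("a/b.cc") - 1)[0] == 'b',
              "Basename should strip the directory part");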

@ -19,7 +19,7 @@
#define ABSL_BASE_INTERNAL_SCHEDULING_MODE_H_
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// Used to describe how a thread may be scheduled. Typically associated with
@ -50,7 +50,7 @@ enum SchedulingMode {
};
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_SCHEDULING_MODE_H_

@ -54,7 +54,7 @@
// holder to acquire the lock. There may be outstanding waiter(s).
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
ABSL_CONST_INIT static base_internal::AtomicHook<void (*)(const void *lock,
@ -96,13 +96,9 @@ void SpinLock::InitLinkerInitializedAndCooperative() {
}
// Monitor the lock to see if its value changes within some time period
// (adaptive_spin_count loop iterations). A timestamp indicating
// when the thread initially started waiting for the lock is passed in via
// the initial_wait_timestamp value. The total wait time in cycles for the
// lock is returned in the wait_cycles parameter. The last value read
// from the lock is returned from the method.
uint32_t SpinLock::SpinLoop(int64_t initial_wait_timestamp,
uint32_t *wait_cycles) {
// (adaptive_spin_count loop iterations). The last value read from the lock
// is returned from the method.
uint32_t SpinLock::SpinLoop() {
// We are already in the slow path of SpinLock, initialize the
// adaptive_spin_count here.
ABSL_CONST_INIT static absl::once_flag init_adaptive_spin_count;
@ -116,22 +112,21 @@ uint32_t SpinLock::SpinLoop(int64_t initial_wait_timestamp,
do {
lock_value = lockword_.load(std::memory_order_relaxed);
} while ((lock_value & kSpinLockHeld) != 0 && --c > 0);
uint32_t spin_loop_wait_cycles =
EncodeWaitCycles(initial_wait_timestamp, CycleClock::Now());
*wait_cycles = spin_loop_wait_cycles;
return TryLockInternal(lock_value, spin_loop_wait_cycles);
return lock_value;
}
void SpinLock::SlowLock() {
uint32_t lock_value = SpinLoop();
lock_value = TryLockInternal(lock_value, 0);
if ((lock_value & kSpinLockHeld) == 0) {
return;
}
// The lock was not obtained initially, so this thread needs to wait for
// it. Record the current timestamp in the local variable wait_start_time
// so the total wait time can be stored in the lockword once this thread
// obtains the lock.
int64_t wait_start_time = CycleClock::Now();
uint32_t wait_cycles;
uint32_t lock_value = SpinLoop(wait_start_time, &wait_cycles);
uint32_t wait_cycles = 0;
int lock_wait_call_count = 0;
while ((lock_value & kSpinLockHeld) != 0) {
// If the lock is currently held, but not marked as having a sleeper, mark
@ -142,7 +137,7 @@ void SpinLock::SlowLock() {
// owner to think it experienced contention.
if (lockword_.compare_exchange_strong(
lock_value, lock_value | kSpinLockSleeper,
std::memory_order_acquire, std::memory_order_relaxed)) {
std::memory_order_relaxed, std::memory_order_relaxed)) {
// Successfully transitioned to kSpinLockSleeper. Pass
// kSpinLockSleeper to the SpinLockWait routine to properly indicate
// the last lock_value observed.
@ -171,7 +166,9 @@ void SpinLock::SlowLock() {
ABSL_TSAN_MUTEX_POST_DIVERT(this, 0);
// Spin again after returning from the wait routine to give this thread
// some chance of obtaining the lock.
lock_value = SpinLoop(wait_start_time, &wait_cycles);
lock_value = SpinLoop();
wait_cycles = EncodeWaitCycles(wait_start_time, CycleClock::Now());
lock_value = TryLockInternal(lock_value, wait_cycles);
}
}
@ -207,14 +204,20 @@ uint32_t SpinLock::EncodeWaitCycles(int64_t wait_start_time,
(wait_end_time - wait_start_time) >> PROFILE_TIMESTAMP_SHIFT;
// Return a representation of the time spent waiting that can be stored in
// the lock word's upper bits. bit_cast is required as Atomic32 is signed.
const uint32_t clamped = static_cast<uint32_t>(
// the lock word's upper bits.
uint32_t clamped = static_cast<uint32_t>(
std::min(scaled_wait_time, kMaxWaitTime) << LOCKWORD_RESERVED_SHIFT);
// bump up value if necessary to avoid returning kSpinLockSleeper.
const uint32_t after_spinlock_sleeper =
if (clamped == 0) {
return kSpinLockSleeper; // Just wake waiters, but don't record contention.
}
// Bump up value if necessary to avoid returning kSpinLockSleeper.
const uint32_t kMinWaitTime =
kSpinLockSleeper + (1 << LOCKWORD_RESERVED_SHIFT);
return clamped == kSpinLockSleeper ? after_spinlock_sleeper : clamped;
if (clamped == kSpinLockSleeper) {
return kMinWaitTime;
}
return clamped;
}
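The encode/decode round trip deliberately drops low-order precision. A worked
sketch, with the two shift widths assumed for illustration (the real constants
are defined in this file):

// Assumed for illustration: PROFILE_TIMESTAMP_SHIFT == 7,
// LOCKWORD_RESERVED_SHIFT == 3.
uint64_t raw_cycles = 10000;                    // cycles spent waiting
uint32_t scaled = raw_cycles >> 7;              // 10000 >> 7 == 78
uint32_t stored = scaled << 3;                  // 78 << 3 == 624; low bits free
uint64_t decoded = uint64_t{stored >> 3} << 7;  // 78 << 7 == 9984
// Precision below 2^7 cycles is dropped by design.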
uint64_t SpinLock::DecodeWaitCycles(uint32_t lock_value) {
@ -226,5 +229,5 @@ uint64_t SpinLock::DecodeWaitCycles(uint32_t lock_value) {
}
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -45,7 +45,7 @@
#include "absl/base/thread_annotations.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
class LOCKABLE SpinLock {
@ -102,7 +102,7 @@ class LOCKABLE SpinLock {
inline void Unlock() UNLOCK_FUNCTION() {
ABSL_TSAN_MUTEX_PRE_UNLOCK(this, 0);
uint32_t lock_value = lockword_.load(std::memory_order_relaxed);
lockword_.store(lock_value & kSpinLockCooperative,
lock_value = lockword_.exchange(lock_value & kSpinLockCooperative,
std::memory_order_release);
if ((lock_value & kSpinLockDisabledScheduling) != 0) {
@ -162,7 +162,7 @@ class LOCKABLE SpinLock {
void InitLinkerInitializedAndCooperative();
void SlowLock() ABSL_ATTRIBUTE_COLD;
void SlowUnlock(uint32_t lock_value) ABSL_ATTRIBUTE_COLD;
uint32_t SpinLoop(int64_t initial_wait_timestamp, uint32_t* wait_cycles);
uint32_t SpinLoop();
inline bool TryLockImpl() {
uint32_t lock_value = lockword_.load(std::memory_order_relaxed);
@ -235,7 +235,7 @@ inline uint32_t SpinLock::TryLockInternal(uint32_t lock_value,
}
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_SPINLOCK_H_

@ -0,0 +1,52 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// See also //absl/synchronization:mutex_benchmark for a comparison of SpinLock
// and Mutex performance under varying levels of contention.
#include "absl/base/internal/raw_logging.h"
#include "absl/base/internal/scheduling_mode.h"
#include "absl/base/internal/spinlock.h"
#include "absl/synchronization/internal/create_thread_identity.h"
#include "benchmark/benchmark.h"
namespace {
template <absl::base_internal::SchedulingMode scheduling_mode>
static void BM_SpinLock(benchmark::State& state) {
// Ensure a ThreadIdentity is installed.
ABSL_INTERNAL_CHECK(
absl::synchronization_internal::GetOrCreateCurrentThreadIdentity() !=
nullptr,
"GetOrCreateCurrentThreadIdentity() failed");
static auto* spinlock = new absl::base_internal::SpinLock(scheduling_mode);
for (auto _ : state) {
absl::base_internal::SpinLockHolder holder(spinlock);
}
}
BENCHMARK_TEMPLATE(BM_SpinLock,
absl::base_internal::SCHEDULE_KERNEL_ONLY)
->UseRealTime()
->Threads(1)
->ThreadPerCpu();
BENCHMARK_TEMPLATE(BM_SpinLock,
absl::base_internal::SCHEDULE_COOPERATIVE_AND_KERNEL)
->UseRealTime()
->Threads(1)
->ThreadPerCpu();
} // namespace

@ -0,0 +1,72 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// This file is a Linux-specific part of spinlock_wait.cc
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <atomic>
#include <cerrno>
#include <climits>
#include <cstdint>
#include <ctime>
#include "absl/base/attributes.h"
// The SpinLock lockword is `std::atomic<uint32_t>`. Here we assert that
// `std::atomic<uint32_t>` is bitwise equivalent to the `int` expected
// by SYS_futex. We also assume that reads/writes done to the lockword
// by SYS_futex have rational semantics with regard to the
// std::atomic<> API. C++ provides no guarantees of these assumptions,
// but they are believed to hold in practice.
static_assert(sizeof(std::atomic<uint32_t>) == sizeof(int),
"SpinLock lockword has the wrong size for a futex");
// Some Android headers are missing these definitions even though they
// support these futex operations.
#ifdef __BIONIC__
#ifndef SYS_futex
#define SYS_futex __NR_futex
#endif
#ifndef FUTEX_PRIVATE_FLAG
#define FUTEX_PRIVATE_FLAG 128
#endif
#endif
extern "C" {
ABSL_ATTRIBUTE_WEAK void AbslInternalSpinLockDelay(
std::atomic<uint32_t> *w, uint32_t value, int loop,
absl::base_internal::SchedulingMode) {
if (loop != 0) {
int save_errno = errno;
struct timespec tm;
tm.tv_sec = 0;
// Increase the delay; we expect (but do not rely on) explicit wakeups.
// We don't rely on explicit wakeups because we intentionally allow for
// a race on the kSpinLockSleeper bit.
tm.tv_nsec = 16 * absl::base_internal::SpinLockSuggestedDelayNS(loop);
syscall(SYS_futex, w, FUTEX_WAIT | FUTEX_PRIVATE_FLAG, value, &tm);
errno = save_errno;
}
}
ABSL_ATTRIBUTE_WEAK void AbslInternalSpinLockWake(std::atomic<uint32_t> *w,
bool all) {
syscall(SYS_futex, w, FUTEX_WAKE | FUTEX_PRIVATE_FLAG, all ? INT_MAX : 1, 0);
}
} // extern "C"

@ -13,7 +13,7 @@
// limitations under the License.
// The OS-specific header included below must provide two calls:
// base::subtle::SpinLockDelay() and base::subtle::SpinLockWake().
// AbslInternalSpinLockDelay() and AbslInternalSpinLockWake().
// See spinlock_wait.h for the specs.
#include <atomic>
@ -23,6 +23,8 @@
#if defined(_WIN32)
#include "absl/base/internal/spinlock_win32.inc"
#elif defined(__linux__)
#include "absl/base/internal/spinlock_linux.inc"
#elif defined(__akaros__)
#include "absl/base/internal/spinlock_akaros.inc"
#else
@ -30,20 +32,21 @@
#endif
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// See spinlock_wait.h for spec.
uint32_t SpinLockWait(std::atomic<uint32_t> *w, int n,
const SpinLockWaitTransition trans[],
base_internal::SchedulingMode scheduling_mode) {
for (int loop = 0; ; loop++) {
int loop = 0;
for (;;) {
uint32_t v = w->load(std::memory_order_acquire);
int i;
for (i = 0; i != n && v != trans[i].from; i++) {
}
if (i == n) {
SpinLockDelay(w, v, loop, scheduling_mode); // no matching transition
SpinLockDelay(w, v, ++loop, scheduling_mode); // no matching transition
} else if (trans[i].to == v || // null transition
w->compare_exchange_strong(v, trans[i].to,
std::memory_order_acquire,
@ -77,5 +80,5 @@ int SpinLockSuggestedDelayNS(int loop) {
}
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -24,7 +24,7 @@
#include "absl/base/internal/scheduling_mode.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// SpinLockWait() waits until it can perform one of several transitions from
@ -63,7 +63,7 @@ void SpinLockDelay(std::atomic<uint32_t> *w, uint32_t value, int loop,
int SpinLockSuggestedDelayNS(int loop);
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
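A sketch of driving SpinLockWait() with a one-entry transition table; the
{from, to, done} field order is assumed from the spec referenced above.

// Sketch: block until `word` can move 0 -> 1, then return.
std::atomic<uint32_t> word{0};
const absl::base_internal::SpinLockWaitTransition trans[] = {
    {0, 1, true},  // the wait is done once 0 -> 1 succeeds
};
uint32_t v = absl::base_internal::SpinLockWait(
    &word, 1, trans, absl::base_internal::SCHEDULE_KERNEL_ONLY);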
// In some build configurations we pass --detect-odr-violations to the

@ -56,7 +56,7 @@
#include "absl/base/internal/unscaledcycleclock.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
static once_flag init_system_info_once;
@ -402,5 +402,5 @@ pid_t GetTID() {
#endif
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -33,7 +33,7 @@
#include "absl/base/port.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// Nominal core processor cycles per second of each processor. This is _not_
@ -59,7 +59,7 @@ using pid_t = DWORD;
pid_t GetTID();
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_SYSINFO_H_

@ -28,7 +28,7 @@
#include "absl/synchronization/mutex.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
namespace {
@ -96,5 +96,5 @@ TEST(SysinfoTest, LinuxGetTID) {
} // namespace
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -28,7 +28,7 @@
#include "absl/base/internal/spinlock.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
#if ABSL_THREAD_IDENTITY_MODE != ABSL_THREAD_IDENTITY_MODE_USE_CPP11
@ -69,6 +69,14 @@ void SetCurrentThreadIdentity(
// NOTE: Not async-safe. But can be open-coded.
absl::call_once(init_thread_identity_key_once, AllocateThreadIdentityKey,
reclaimer);
#ifdef __EMSCRIPTEN__
// Emscripten PThread implementation does not support signals.
// See https://kripken.github.io/emscripten-site/docs/porting/pthreads.html
// for more information.
pthread_setspecific(thread_identity_pthread_key,
reinterpret_cast<void*>(identity));
#else
// We must mask signals around the call to setspecific as with current glibc,
// a concurrent getspecific (needed for GetCurrentThreadIdentityIfPresent())
// may zero our value.
@ -82,6 +90,8 @@ void SetCurrentThreadIdentity(
pthread_setspecific(thread_identity_pthread_key,
reinterpret_cast<void*>(identity));
pthread_sigmask(SIG_SETMASK, &curr_signals, nullptr);
#endif // !__EMSCRIPTEN__
#elif ABSL_THREAD_IDENTITY_MODE == ABSL_THREAD_IDENTITY_MODE_USE_TLS
// NOTE: Not async-safe. But can be open-coded.
absl::call_once(init_thread_identity_key_once, AllocateThreadIdentityKey,
@ -121,5 +131,5 @@ ThreadIdentity* CurrentThreadIdentityIfPresent() {
#endif
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -33,7 +33,7 @@
#include "absl/base/internal/per_thread_tls.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
struct SynchLocksHeld;
struct SynchWaitParams;
@ -237,6 +237,6 @@ inline ThreadIdentity* CurrentThreadIdentityIfPresent() {
#endif
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_THREAD_IDENTITY_H_

@ -25,7 +25,7 @@
#include "absl/synchronization/mutex.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
namespace {
@ -124,5 +124,5 @@ TEST(ThreadIdentityTest, ReusedThreadIdentityMutexTest) {
} // namespace
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -22,7 +22,7 @@
#include "absl/base/internal/raw_logging.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
namespace {
@ -31,8 +31,8 @@ template <typename T>
#ifdef ABSL_HAVE_EXCEPTIONS
throw error;
#else
ABSL_RAW_LOG(ERROR, "%s", error.what());
abort();
ABSL_RAW_LOG(FATAL, "%s", error.what());
std::abort();
#endif
}
} // namespace
@ -104,5 +104,5 @@ void ThrowStdBadFunctionCall() { Throw(std::bad_function_call()); }
void ThrowStdBadAlloc() { Throw(std::bad_alloc()); }
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -20,7 +20,7 @@
#include <string>
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// Helper functions that allow throwing exceptions consistently from anywhere.
@ -67,7 +67,7 @@ namespace base_internal {
// [[noreturn]] void ThrowStdBadArrayNewLength();
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_THROW_DELEGATE_H_

@ -65,7 +65,8 @@ void __sanitizer_unaligned_store64(void *p, uint64_t v);
} // extern "C"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
inline uint16_t UnalignedLoad16(const void *p) {
return __sanitizer_unaligned_load16(p);
@ -91,19 +92,71 @@ inline void UnalignedStore64(void *p, uint64_t v) {
__sanitizer_unaligned_store64(p, v);
}
} // inline namespace lts_2018_06_20
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#define ABSL_INTERNAL_UNALIGNED_LOAD16(_p) (absl::UnalignedLoad16(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD32(_p) (absl::UnalignedLoad32(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) (absl::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD16(_p) \
(absl::base_internal::UnalignedLoad16(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD32(_p) \
(absl::base_internal::UnalignedLoad32(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) \
(absl::base_internal::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_STORE16(_p, _val) \
(absl::base_internal::UnalignedStore16(_p, _val))
#define ABSL_INTERNAL_UNALIGNED_STORE32(_p, _val) \
(absl::base_internal::UnalignedStore32(_p, _val))
#define ABSL_INTERNAL_UNALIGNED_STORE64(_p, _val) \
(absl::base_internal::UnalignedStore64(_p, _val))
#elif defined(UNDEFINED_BEHAVIOR_SANITIZER)
namespace absl {
inline namespace lts_2018_12_18 {
namespace base_internal {
inline uint16_t UnalignedLoad16(const void *p) {
uint16_t t;
memcpy(&t, p, sizeof t);
return t;
}
inline uint32_t UnalignedLoad32(const void *p) {
uint32_t t;
memcpy(&t, p, sizeof t);
return t;
}
inline uint64_t UnalignedLoad64(const void *p) {
uint64_t t;
memcpy(&t, p, sizeof t);
return t;
}
inline void UnalignedStore16(void *p, uint16_t v) { memcpy(p, &v, sizeof v); }
inline void UnalignedStore32(void *p, uint32_t v) { memcpy(p, &v, sizeof v); }
inline void UnalignedStore64(void *p, uint64_t v) { memcpy(p, &v, sizeof v); }
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#define ABSL_INTERNAL_UNALIGNED_LOAD16(_p) \
(absl::base_internal::UnalignedLoad16(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD32(_p) \
(absl::base_internal::UnalignedLoad32(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) \
(absl::base_internal::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_STORE16(_p, _val) \
(absl::UnalignedStore16(_p, _val))
(absl::base_internal::UnalignedStore16(_p, _val))
#define ABSL_INTERNAL_UNALIGNED_STORE32(_p, _val) \
(absl::UnalignedStore32(_p, _val))
(absl::base_internal::UnalignedStore32(_p, _val))
#define ABSL_INTERNAL_UNALIGNED_STORE64(_p, _val) \
(absl::UnalignedStore64(_p, _val))
(absl::base_internal::UnalignedStore64(_p, _val))
#elif defined(__x86_64__) || defined(_M_X64) || defined(__i386) || \
defined(_M_IX86) || defined(__ppc__) || defined(__PPC__) || \
@ -160,8 +213,8 @@ inline void UnalignedStore64(void *p, uint64_t v) {
// so we do that.
namespace absl {
inline namespace lts_2018_06_20 {
namespace internal {
inline namespace lts_2018_12_18 {
namespace base_internal {
struct Unaligned16Struct {
uint16_t value;
@ -173,24 +226,27 @@ struct Unaligned32Struct {
uint8_t dummy; // To make the size non-power-of-two.
} ABSL_ATTRIBUTE_PACKED;
} // namespace internal
} // inline namespace lts_2018_06_20
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#define ABSL_INTERNAL_UNALIGNED_LOAD16(_p) \
((reinterpret_cast<const ::absl::internal::Unaligned16Struct *>(_p))->value)
((reinterpret_cast<const ::absl::base_internal::Unaligned16Struct *>(_p)) \
->value)
#define ABSL_INTERNAL_UNALIGNED_LOAD32(_p) \
((reinterpret_cast<const ::absl::internal::Unaligned32Struct *>(_p))->value)
((reinterpret_cast<const ::absl::base_internal::Unaligned32Struct *>(_p)) \
->value)
#define ABSL_INTERNAL_UNALIGNED_STORE16(_p, _val) \
((reinterpret_cast< ::absl::internal::Unaligned16Struct *>(_p))->value = \
(_val))
((reinterpret_cast< ::absl::base_internal::Unaligned16Struct *>(_p)) \
->value = (_val))
#define ABSL_INTERNAL_UNALIGNED_STORE32(_p, _val) \
((reinterpret_cast< ::absl::internal::Unaligned32Struct *>(_p))->value = \
(_val))
((reinterpret_cast< ::absl::base_internal::Unaligned32Struct *>(_p)) \
->value = (_val))
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
inline uint64_t UnalignedLoad64(const void *p) {
uint64_t t;
@ -200,12 +256,14 @@ inline uint64_t UnalignedLoad64(const void *p) {
inline void UnalignedStore64(void *p, uint64_t v) { memcpy(p, &v, sizeof v); }
} // inline namespace lts_2018_06_20
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) (absl::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) \
(absl::base_internal::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_STORE64(_p, _val) \
(absl::UnalignedStore64(_p, _val))
(absl::base_internal::UnalignedStore64(_p, _val))
#else
@ -217,7 +275,8 @@ inline void UnalignedStore64(void *p, uint64_t v) { memcpy(p, &v, sizeof v); }
// unaligned loads and stores.
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
inline uint16_t UnalignedLoad16(const void *p) {
uint16_t t;
@ -243,19 +302,23 @@ inline void UnalignedStore32(void *p, uint32_t v) { memcpy(p, &v, sizeof v); }
inline void UnalignedStore64(void *p, uint64_t v) { memcpy(p, &v, sizeof v); }
} // inline namespace lts_2018_06_20
} // namespace base_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#define ABSL_INTERNAL_UNALIGNED_LOAD16(_p) (absl::UnalignedLoad16(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD32(_p) (absl::UnalignedLoad32(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) (absl::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD16(_p) \
(absl::base_internal::UnalignedLoad16(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD32(_p) \
(absl::base_internal::UnalignedLoad32(_p))
#define ABSL_INTERNAL_UNALIGNED_LOAD64(_p) \
(absl::base_internal::UnalignedLoad64(_p))
#define ABSL_INTERNAL_UNALIGNED_STORE16(_p, _val) \
(absl::UnalignedStore16(_p, _val))
(absl::base_internal::UnalignedStore16(_p, _val))
#define ABSL_INTERNAL_UNALIGNED_STORE32(_p, _val) \
(absl::UnalignedStore32(_p, _val))
(absl::base_internal::UnalignedStore32(_p, _val))
#define ABSL_INTERNAL_UNALIGNED_STORE64(_p, _val) \
(absl::UnalignedStore64(_p, _val))
(absl::base_internal::UnalignedStore64(_p, _val))
#endif
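The memcpy idiom the fallback paths above rely on, shown in isolation:
compilers collapse it to a single load wherever unaligned access is legal, and
emit a safe byte-wise load elsewhere.

#include <cstdint>
#include <cstring>

// Mirrors UnalignedLoad32 above; safe at any alignment.
inline uint32_t LoadU32(const void* p) {
  uint32_t v;
  std::memcpy(&v, p, sizeof v);
  return v;
}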

@ -27,7 +27,7 @@
#include "absl/base/internal/sysinfo.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
#if defined(__i386__)
@ -97,7 +97,7 @@ double UnscaledCycleClock::Frequency() {
#endif
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_USE_UNSCALED_CYCLECLOCK

@ -84,7 +84,7 @@
#define ABSL_INTERNAL_UNSCALED_CYCLECLOCK_FREQUENCY_IS_CPU_FREQUENCY
#endif
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace time_internal {
class UnscaledCycleClockWrapperForGetCurrentTime;
} // namespace time_internal
@ -114,7 +114,7 @@ class UnscaledCycleClock {
};
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_USE_UNSCALED_CYCLECLOCK

@ -25,7 +25,7 @@
#include "absl/strings/str_cat.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
namespace {
@ -198,5 +198,5 @@ TEST(InvokeTest, SfinaeFriendly) {
} // namespace
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -21,7 +21,7 @@
#include "absl/base/attributes.h"
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
// Four severity levels are defined. Logging APIs should terminate the program
// when a message is logged at severity `kFatal`; the other levels have no
@ -40,7 +40,7 @@ constexpr std::array<absl::LogSeverity, 4> LogSeverities() {
absl::LogSeverity::kError, absl::LogSeverity::kFatal}};
}
// Returns the all-caps std::string representation (e.g. "INFO") of the specified
// Returns the all-caps string representation (e.g. "INFO") of the specified
// severity level if it is one of the normal levels and "UNKNOWN" otherwise.
constexpr const char* LogSeverityName(absl::LogSeverity s) {
return s == absl::LogSeverity::kInfo
@ -63,7 +63,7 @@ constexpr absl::LogSeverity NormalizeLogSeverity(int s) {
return NormalizeLogSeverity(static_cast<absl::LogSeverity>(s));
}
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_BASE_INTERNAL_LOG_SEVERITY_H_
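A small sketch of LogSeverityName in use:

// Prints "level=WARNING"; LogSeverityName returns the all-caps name.
ABSL_RAW_LOG(INFO, "level=%s",
             absl::LogSeverityName(absl::LogSeverity::kWarning));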

@ -43,14 +43,14 @@
(sizeof(::absl::macros_internal::ArraySizeHelper(array)))
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace macros_internal {
// Note: this internal template function declaration is used by ABSL_ARRAYSIZE.
// The function doesn't need a definition, as we only use its type.
template <typename T, size_t N>
auto ArraySizeHelper(const T (&array)[N]) -> char (&)[N];
} // namespace macros_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
// kLinkerInitialized
@ -74,13 +74,13 @@ auto ArraySizeHelper(const T (&array)[N]) -> char (&)[N];
// // Invocation
// static MyClass my_global(absl::base_internal::kLinkerInitialized);
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
enum LinkerInitialized {
kLinkerInitialized = 0,
};
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl
// ABSL_FALLTHROUGH_INTENDED
@ -203,4 +203,14 @@ enum LinkerInitialized {
: [] { assert(false && #expr); }()) // NOLINT
#endif
#ifdef ABSL_HAVE_EXCEPTIONS
#define ABSL_INTERNAL_TRY try
#define ABSL_INTERNAL_CATCH_ANY catch (...)
#define ABSL_INTERNAL_RETHROW do { throw; } while (false)
#else // ABSL_HAVE_EXCEPTIONS
#define ABSL_INTERNAL_TRY if (true)
#define ABSL_INTERNAL_CATCH_ANY else if (false)
#define ABSL_INTERNAL_RETHROW do {} while (false)
#endif // ABSL_HAVE_EXCEPTIONS
#endif // ABSL_BASE_MACROS_H_
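A sketch of the exception-neutral pattern these macros enable; DoWork and
Cleanup are hypothetical.

// Compiles with or without exception support.
ABSL_INTERNAL_TRY {
  DoWork();   // hypothetical operation that may throw
}
ABSL_INTERNAL_CATCH_ANY {
  Cleanup();  // hypothetical rollback
  ABSL_INTERNAL_RETHROW;
}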

@ -18,12 +18,20 @@
#include "absl/base/internal/raw_logging.h"
#include <tuple>
#include "gtest/gtest.h"
#include "absl/strings/str_cat.h"
namespace {
TEST(RawLoggingCompilationTest, Log) {
ABSL_RAW_LOG(INFO, "RAW INFO: %d", 1);
ABSL_RAW_LOG(INFO, "RAW INFO: %d %d", 1, 2);
ABSL_RAW_LOG(INFO, "RAW INFO: %d %d %d", 1, 2, 3);
ABSL_RAW_LOG(INFO, "RAW INFO: %d %d %d %d", 1, 2, 3, 4);
ABSL_RAW_LOG(INFO, "RAW INFO: %d %d %d %d %d", 1, 2, 3, 4, 5);
ABSL_RAW_LOG(WARNING, "RAW WARNING: %d", 1);
ABSL_RAW_LOG(ERROR, "RAW ERROR: %d", 1);
}
@ -32,7 +40,7 @@ TEST(RawLoggingCompilationTest, PassingCheck) {
}
// Not all platforms support output from raw log, so we don't verify any
// particular output for RAW check failures (expecting the empty std::string
// particular output for RAW check failures (expecting the empty string
// accomplishes this). This test is primarily a compilation test, but we
// are verifying process death when EXPECT_DEATH works for a platform.
const char kExpectedDeathOutput[] = "";
@ -47,4 +55,25 @@ TEST(RawLoggingDeathTest, LogFatal) {
kExpectedDeathOutput);
}
TEST(InternalLog, CompilationTest) {
ABSL_INTERNAL_LOG(INFO, "Internal Log");
std::string log_msg = "Internal Log";
ABSL_INTERNAL_LOG(INFO, log_msg);
ABSL_INTERNAL_LOG(INFO, log_msg + " 2");
float d = 1.1f;
ABSL_INTERNAL_LOG(INFO, absl::StrCat("Internal log ", 3, " + ", d));
}
TEST(InternalLogDeathTest, FailingCheck) {
EXPECT_DEATH_IF_SUPPORTED(ABSL_INTERNAL_CHECK(1 == 0, "explanation"),
kExpectedDeathOutput);
}
TEST(InternalLogDeathTest, LogFatal) {
EXPECT_DEATH_IF_SUPPORTED(ABSL_INTERNAL_LOG(FATAL, "my dog has fleas"),
kExpectedDeathOutput);
}
} // namespace

@ -36,7 +36,7 @@ constexpr int32_t kNumThreads = 10;
constexpr int32_t kIters = 1000;
namespace absl {
inline namespace lts_2018_06_20 {
inline namespace lts_2018_12_18 {
namespace base_internal {
// This is defined outside of anonymous namespace so that it can be
@ -156,7 +156,8 @@ TEST(SpinLock, WaitCyclesEncoding) {
// Test corner cases
int64_t start_time = time_distribution(generator);
EXPECT_EQ(0, SpinLockTest::EncodeWaitCycles(start_time, start_time));
EXPECT_EQ(kSpinLockSleeper,
SpinLockTest::EncodeWaitCycles(start_time, start_time));
EXPECT_EQ(0, SpinLockTest::DecodeWaitCycles(0));
EXPECT_EQ(0, SpinLockTest::DecodeWaitCycles(kLockwordReservedMask));
EXPECT_EQ(kMaxCycles & ~kProfileTimestampMask,
@ -264,5 +265,5 @@ TEST(SpinLockWithThreads, DoesNotDeadlock) {
} // namespace
} // namespace base_internal
} // inline namespace lts_2018_06_20
} // inline namespace lts_2018_12_18
} // namespace absl

@ -31,7 +31,6 @@
// that evaluate to a concrete mutex object whenever possible. If the mutex
// you want to refer to is not in scope, you may use a member pointer
// (e.g. &MyClass::mutex_) to refer to a mutex in some (unknown) object.
//
#ifndef ABSL_BASE_THREAD_ANNOTATIONS_H_
#define ABSL_BASE_THREAD_ANNOTATIONS_H_
@ -109,13 +108,23 @@
// The mutex is expected to be held both on entry to, and exit from, the
// function.
//
// An exclusive lock allows read-write access to the guarded data member(s), and
// only one thread can acquire a lock exclusively at any one time. A shared lock
// allows read-only access, and any number of threads can acquire a shared lock
// concurrently.
//
// Generally, non-const methods should be annotated with
// EXCLUSIVE_LOCKS_REQUIRED, while const methods should be annotated with
// SHARED_LOCKS_REQUIRED.
//
// Example:
//
// Mutex mu1, mu2;
// int a GUARDED_BY(mu1);
// int b GUARDED_BY(mu2);
//
// void foo() EXCLUSIVE_LOCKS_REQUIRED(mu1, mu2) { ... };
// void foo() EXCLUSIVE_LOCKS_REQUIRED(mu1, mu2) { ... }
// void bar() const SHARED_LOCKS_REQUIRED(mu1, mu2) { ... }
#define EXCLUSIVE_LOCKS_REQUIRED(...) \
THREAD_ANNOTATION_ATTRIBUTE__(exclusive_locks_required(__VA_ARGS__))

@ -0,0 +1,39 @@
#
# Copyright 2018 The Abseil Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Creates config_setting that allows selecting based on 'compiler' value."""
def create_llvm_config(name, visibility):
# The "do_not_use_tools_cpp_compiler_present" attribute exists to
# distinguish between older versions of Bazel that do not support
# "@bazel_tools//tools/cpp:compiler" flag_value, and newer ones that do.
# In the future, the only way to select on the compiler will be through
# flag_values{"@bazel_tools//tools/cpp:compiler"} and the else branch can
# be removed.
if hasattr(cc_common, "do_not_use_tools_cpp_compiler_present"):
native.config_setting(
name = name,
flag_values = {
"@bazel_tools//tools/cpp:compiler": "llvm",
},
visibility = visibility,
)
else:
native.config_setting(
name = name,
values = {"compiler": "llvm"},
visibility = visibility,
)

@ -19,17 +19,38 @@ load(
"ABSL_DEFAULT_COPTS",
"ABSL_TEST_COPTS",
"ABSL_EXCEPTIONS_FLAG",
"ABSL_EXCEPTIONS_FLAG_LINKOPTS",
)
package(default_visibility = ["//visibility:public"])
licenses(["notice"]) # Apache 2.0
cc_library(
name = "compressed_tuple",
hdrs = ["internal/compressed_tuple.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
"//absl/utility",
],
)
cc_test(
name = "compressed_tuple_test",
srcs = ["internal/compressed_tuple_test.cc"],
copts = ABSL_TEST_COPTS,
deps = [
":compressed_tuple",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "fixed_array",
hdrs = ["fixed_array.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":compressed_tuple",
"//absl/algorithm",
"//absl/base:core_headers",
"//absl/base:dynamic_annotations",
@ -42,9 +63,11 @@ cc_test(
name = "fixed_array_test",
srcs = ["fixed_array_test.cc"],
copts = ABSL_TEST_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
deps = [
":fixed_array",
"//absl/base:exception_testing",
"//absl/hash:hash_testing",
"//absl/memory",
"@com_google_googletest//:gtest_main",
],
@ -57,11 +80,24 @@ cc_test(
deps = [
":fixed_array",
"//absl/base:exception_testing",
"//absl/hash:hash_testing",
"//absl/memory",
"@com_google_googletest//:gtest_main",
],
)
cc_test(
name = "fixed_array_exception_safety_test",
srcs = ["fixed_array_exception_safety_test.cc"],
copts = ABSL_TEST_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
deps = [
":fixed_array",
"//absl/base:exception_safety_testing",
"@com_google_googletest//:gtest_main",
],
)
cc_test(
name = "fixed_array_benchmark",
srcs = ["fixed_array_benchmark.cc"],
@ -89,12 +125,14 @@ cc_test(
name = "inlined_vector_test",
srcs = ["inlined_vector_test.cc"],
copts = ABSL_TEST_COPTS + ABSL_EXCEPTIONS_FLAG,
linkopts = ABSL_EXCEPTIONS_FLAG_LINKOPTS,
deps = [
":inlined_vector",
":test_instance_tracker",
"//absl/base",
"//absl/base:core_headers",
"//absl/base:exception_testing",
"//absl/hash:hash_testing",
"//absl/memory",
"//absl/strings",
"@com_google_googletest//:gtest_main",
@ -111,6 +149,7 @@ cc_test(
"//absl/base",
"//absl/base:core_headers",
"//absl/base:exception_testing",
"//absl/hash:hash_testing",
"//absl/memory",
"//absl/strings",
"@com_google_googletest//:gtest_main",
@ -150,3 +189,462 @@ cc_test(
"@com_google_googletest//:gtest_main",
],
)
NOTEST_TAGS_NONMOBILE = [
"no_test_darwin_x86_64",
"no_test_loonix",
]
NOTEST_TAGS_MOBILE = [
"no_test_android_arm",
"no_test_android_arm64",
"no_test_android_x86",
"no_test_ios_x86_64",
]
NOTEST_TAGS = NOTEST_TAGS_MOBILE + NOTEST_TAGS_NONMOBILE
cc_library(
name = "flat_hash_map",
hdrs = ["flat_hash_map.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":container_memory",
":hash_function_defaults",
":raw_hash_map",
"//absl/algorithm:container",
"//absl/memory",
],
)
cc_test(
name = "flat_hash_map_test",
srcs = ["flat_hash_map_test.cc"],
copts = ABSL_TEST_COPTS + ["-DUNORDERED_MAP_CXX17"],
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":flat_hash_map",
":hash_generator_testing",
":unordered_map_constructor_test",
":unordered_map_lookup_test",
":unordered_map_modifiers_test",
"//absl/types:any",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "flat_hash_set",
hdrs = ["flat_hash_set.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":container_memory",
":hash_function_defaults",
":raw_hash_set",
"//absl/algorithm:container",
"//absl/base:core_headers",
"//absl/memory",
],
)
cc_test(
name = "flat_hash_set_test",
srcs = ["flat_hash_set_test.cc"],
copts = ABSL_TEST_COPTS + ["-DUNORDERED_SET_CXX17"],
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":flat_hash_set",
":hash_generator_testing",
":unordered_set_constructor_test",
":unordered_set_lookup_test",
":unordered_set_modifiers_test",
"//absl/memory",
"//absl/strings",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "node_hash_map",
hdrs = ["node_hash_map.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":container_memory",
":hash_function_defaults",
":node_hash_policy",
":raw_hash_map",
"//absl/algorithm:container",
"//absl/memory",
],
)
cc_test(
name = "node_hash_map_test",
srcs = ["node_hash_map_test.cc"],
copts = ABSL_TEST_COPTS + ["-DUNORDERED_MAP_CXX17"],
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":hash_generator_testing",
":node_hash_map",
":tracked",
":unordered_map_constructor_test",
":unordered_map_lookup_test",
":unordered_map_modifiers_test",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "node_hash_set",
hdrs = ["node_hash_set.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":hash_function_defaults",
":node_hash_policy",
":raw_hash_set",
"//absl/algorithm:container",
"//absl/memory",
],
)
cc_test(
name = "node_hash_set_test",
srcs = ["node_hash_set_test.cc"],
copts = ABSL_TEST_COPTS + ["-DUNORDERED_SET_CXX17"],
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":hash_generator_testing",
":node_hash_set",
":unordered_set_constructor_test",
":unordered_set_lookup_test",
":unordered_set_modifiers_test",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "container_memory",
hdrs = ["internal/container_memory.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
"//absl/memory",
"//absl/utility",
],
)
cc_test(
name = "container_memory_test",
srcs = ["internal/container_memory_test.cc"],
copts = ABSL_TEST_COPTS,
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":container_memory",
"//absl/strings",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "hash_function_defaults",
hdrs = ["internal/hash_function_defaults.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
"//absl/base:config",
"//absl/hash",
"//absl/strings",
],
)
cc_test(
name = "hash_function_defaults_test",
srcs = ["internal/hash_function_defaults_test.cc"],
copts = ABSL_TEST_COPTS,
tags = NOTEST_TAGS,
deps = [
":hash_function_defaults",
"//absl/hash",
"//absl/strings",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "hash_generator_testing",
testonly = 1,
srcs = ["internal/hash_generator_testing.cc"],
hdrs = ["internal/hash_generator_testing.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_policy_testing",
"//absl/meta:type_traits",
"//absl/strings",
],
)
cc_library(
name = "hash_policy_testing",
testonly = 1,
hdrs = ["internal/hash_policy_testing.h"],
copts = ABSL_TEST_COPTS,
deps = [
"//absl/hash",
"//absl/strings",
],
)
cc_test(
name = "hash_policy_testing_test",
srcs = ["internal/hash_policy_testing_test.cc"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_policy_testing",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "hash_policy_traits",
hdrs = ["internal/hash_policy_traits.h"],
copts = ABSL_DEFAULT_COPTS,
deps = ["//absl/meta:type_traits"],
)
cc_test(
name = "hash_policy_traits_test",
srcs = ["internal/hash_policy_traits_test.cc"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_policy_traits",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "hashtable_debug",
hdrs = ["internal/hashtable_debug.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":hashtable_debug_hooks",
],
)
cc_library(
name = "hashtable_debug_hooks",
hdrs = ["internal/hashtable_debug_hooks.h"],
copts = ABSL_DEFAULT_COPTS,
)
cc_library(
name = "node_hash_policy",
hdrs = ["internal/node_hash_policy.h"],
copts = ABSL_DEFAULT_COPTS,
)
cc_test(
name = "node_hash_policy_test",
srcs = ["internal/node_hash_policy_test.cc"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_policy_traits",
":node_hash_policy",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "raw_hash_map",
hdrs = ["internal/raw_hash_map.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":container_memory",
":raw_hash_set",
],
)
cc_library(
name = "raw_hash_set",
srcs = ["internal/raw_hash_set.cc"],
hdrs = ["internal/raw_hash_set.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
":compressed_tuple",
":container_memory",
":hash_policy_traits",
":hashtable_debug_hooks",
":layout",
"//absl/base:bits",
"//absl/base:config",
"//absl/base:core_headers",
"//absl/base:endian",
"//absl/memory",
"//absl/meta:type_traits",
"//absl/types:optional",
"//absl/utility",
],
)
cc_test(
name = "raw_hash_set_test",
srcs = ["internal/raw_hash_set_test.cc"],
copts = ABSL_TEST_COPTS,
linkstatic = 1,
tags = NOTEST_TAGS,
deps = [
":container_memory",
":hash_function_defaults",
":hash_policy_testing",
":hashtable_debug",
":raw_hash_set",
"//absl/base",
"//absl/base:core_headers",
"//absl/strings",
"@com_google_googletest//:gtest_main",
],
)
cc_test(
name = "raw_hash_set_allocator_test",
size = "small",
srcs = ["internal/raw_hash_set_allocator_test.cc"],
copts = ABSL_TEST_COPTS,
deps = [
":raw_hash_set",
":tracked",
"//absl/base:core_headers",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "layout",
hdrs = ["internal/layout.h"],
copts = ABSL_DEFAULT_COPTS,
deps = [
"//absl/base:core_headers",
"//absl/meta:type_traits",
"//absl/strings",
"//absl/types:span",
"//absl/utility",
],
)
cc_test(
name = "layout_test",
size = "small",
srcs = ["internal/layout_test.cc"],
copts = ABSL_TEST_COPTS,
tags = NOTEST_TAGS,
visibility = ["//visibility:private"],
deps = [
":layout",
"//absl/base",
"//absl/base:core_headers",
"//absl/types:span",
"@com_google_googletest//:gtest_main",
],
)
cc_library(
name = "tracked",
testonly = 1,
hdrs = ["internal/tracked.h"],
copts = ABSL_TEST_COPTS,
)
cc_library(
name = "unordered_map_constructor_test",
testonly = 1,
hdrs = ["internal/unordered_map_constructor_test.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_generator_testing",
":hash_policy_testing",
"@com_google_googletest//:gtest",
],
)
cc_library(
name = "unordered_map_lookup_test",
testonly = 1,
hdrs = ["internal/unordered_map_lookup_test.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_generator_testing",
":hash_policy_testing",
"@com_google_googletest//:gtest",
],
)
cc_library(
name = "unordered_map_modifiers_test",
testonly = 1,
hdrs = ["internal/unordered_map_modifiers_test.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_generator_testing",
":hash_policy_testing",
"@com_google_googletest//:gtest",
],
)
cc_library(
name = "unordered_set_constructor_test",
testonly = 1,
hdrs = ["internal/unordered_set_constructor_test.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_generator_testing",
":hash_policy_testing",
"@com_google_googletest//:gtest",
],
)
cc_library(
name = "unordered_set_lookup_test",
testonly = 1,
hdrs = ["internal/unordered_set_lookup_test.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_generator_testing",
":hash_policy_testing",
"@com_google_googletest//:gtest",
],
)
cc_library(
name = "unordered_set_modifiers_test",
testonly = 1,
hdrs = ["internal/unordered_set_modifiers_test.h"],
copts = ABSL_TEST_COPTS,
deps = [
":hash_generator_testing",
":hash_policy_testing",
"@com_google_googletest//:gtest",
],
)
cc_test(
name = "unordered_set_test",
srcs = ["internal/unordered_set_test.cc"],
copts = ABSL_TEST_COPTS,
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":unordered_set_constructor_test",
":unordered_set_lookup_test",
":unordered_set_modifiers_test",
"@com_google_googletest//:gtest_main",
],
)
cc_test(
name = "unordered_map_test",
srcs = ["internal/unordered_map_test.cc"],
copts = ABSL_TEST_COPTS,
tags = NOTEST_TAGS_NONMOBILE,
deps = [
":unordered_map_constructor_test",
":unordered_map_lookup_test",
":unordered_map_modifiers_test",
"@com_google_googletest//:gtest_main",
],
)

@@ -14,113 +14,674 @@
# limitations under the License.
#
# This is deprecated and will be removed in the future. It also doesn't do
# anything anyway. Prefer to use the library associated with the API you are
# using.
absl_cc_library(
NAME
container
SRCS
"internal/raw_hash_set.cc"
COPTS
${ABSL_DEFAULT_COPTS}
PUBLIC
)
absl_cc_library(
NAME
compressed_tuple
HDRS
"internal/compressed_tuple.h"
DEPS
absl::utility
PUBLIC
)
absl_cc_test(
NAME
compressed_tuple_test
SRCS
"internal/compressed_tuple_test.cc"
DEPS
absl::compressed_tuple
gmock_main
)
absl_cc_library(
NAME
fixed_array
HDRS
"fixed_array.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::compressed_tuple
absl::algorithm
absl::core_headers
absl::dynamic_annotations
absl::throw_delegate
absl::memory
PUBLIC
)
absl_cc_test(
NAME
fixed_array_test
SRCS
"fixed_array_test.cc"
COPTS
${ABSL_EXCEPTIONS_FLAG}
LINKOPTS
${ABSL_EXCEPTIONS_FLAG_LINKOPTS}
DEPS
absl::fixed_array
absl::exception_testing
absl::hash_testing
absl::memory
gmock_main
)
absl_cc_test(
NAME
fixed_array_test_noexceptions
SRCS
"fixed_array_test.cc"
DEPS
absl::fixed_array
absl::exception_testing
absl::hash_testing
absl::memory
gmock_main
)
absl_cc_test(
NAME
fixed_array_exception_safety_test
SRCS
"fixed_array_exception_safety_test.cc"
COPTS
${ABSL_EXCEPTIONS_FLAG}
LINKOPTS
${ABSL_EXCEPTIONS_FLAG_LINKOPTS}
DEPS
absl::fixed_array
absl::exception_safety_testing
gmock_main
)
absl_cc_library(
NAME
inlined_vector
HDRS
"inlined_vector.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::algorithm
absl::core_headers
absl::throw_delegate
absl::memory
PUBLIC
)
absl_cc_test(
NAME
inlined_vector_test
SRCS
"inlined_vector_test.cc"
COPTS
${ABSL_EXCEPTIONS_FLAG}
LINKOPTS
${ABSL_EXCEPTIONS_FLAG_LINKOPTS}
DEPS
absl::inlined_vector
absl::test_instance_tracker
absl::base
absl::core_headers
absl::exception_testing
absl::hash_testing
absl::memory
absl::strings
gmock_main
)
absl_cc_test(
NAME
inlined_vector_test_noexceptions
SRCS
"inlined_vector_test.cc"
DEPS
absl::inlined_vector
absl::test_instance_tracker
absl::base
absl::core_headers
absl::exception_testing
absl::hash_testing
absl::memory
absl::strings
gmock_main
)
absl_cc_library(
NAME
test_instance_tracker
HDRS
"internal/test_instance_tracker.h"
SRCS
"internal/test_instance_tracker.cc"
COPTS
${ABSL_DEFAULT_COPTS}
TESTONLY
)
absl_cc_test(
NAME
test_instance_tracker_test
SRCS
"internal/test_instance_tracker_test.cc"
DEPS
absl::test_instance_tracker
gmock_main
)
absl_cc_library(
NAME
flat_hash_map
HDRS
"flat_hash_map.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::container_memory
absl::hash_function_defaults
absl::raw_hash_map
absl::algorithm_container
absl::memory
PUBLIC
)
absl_cc_test(
NAME
flat_hash_map_test
SRCS
"flat_hash_map_test.cc"
COPTS
"-DUNORDERED_MAP_CXX17"
DEPS
absl::flat_hash_map
absl::hash_generator_testing
absl::unordered_map_constructor_test
absl::unordered_map_lookup_test
absl::unordered_map_modifiers_test
absl::any
gmock_main
)
absl_cc_library(
NAME
flat_hash_set
HDRS
"flat_hash_set.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::container_memory
absl::hash_function_defaults
absl::raw_hash_set
absl::algorithm_container
absl::core_headers
absl::memory
PUBLIC
)
absl_cc_test(
NAME
flat_hash_set_test
SRCS
"flat_hash_set_test.cc"
COPTS
"-DUNORDERED_SET_CXX17"
DEPS
absl::flat_hash_set
absl::hash_generator_testing
absl::unordered_set_constructor_test
absl::unordered_set_lookup_test
absl::unordered_set_modifiers_test
absl::memory
absl::strings
gmock_main
)
absl_cc_library(
NAME
node_hash_map
HDRS
"node_hash_map.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::container_memory
absl::hash_function_defaults
absl::node_hash_policy
absl::raw_hash_map
absl::algorithm_container
absl::memory
PUBLIC
)
absl_cc_test(
NAME
node_hash_map_test
SRCS
"node_hash_map_test.cc"
COPTS
"-DUNORDERED_MAP_CXX17"
DEPS
absl::hash_generator_testing
absl::node_hash_map
absl::tracked
absl::unordered_map_constructor_test
absl::unordered_map_lookup_test
absl::unordered_map_modifiers_test
gmock_main
)
absl_cc_library(
NAME
node_hash_set
HDRS
"node_hash_set.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::hash_function_defaults
absl::node_hash_policy
absl::raw_hash_set
absl::algorithm_container
absl::memory
PUBLIC
)
absl_cc_test(
NAME
node_hash_set_test
SRCS
"node_hash_set_test.cc"
COPTS
"-DUNORDERED_SET_CXX17"
DEPS
absl::hash_generator_testing
absl::node_hash_set
absl::unordered_set_constructor_test
absl::unordered_set_lookup_test
absl::unordered_set_modifiers_test
gmock_main
)
absl_cc_library(
NAME
container_memory
HDRS
"internal/container_memory.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::memory
absl::utility
PUBLIC
)
absl_cc_test(
NAME
container_memory_test
SRCS
"internal/container_memory_test.cc"
DEPS
absl::container_memory
absl::strings
gmock_main
)
absl_cc_library(
NAME
hash_function_defaults
HDRS
"internal/hash_function_defaults.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::config
absl::hash
absl::strings
PUBLIC
)
absl_cc_test(
NAME
hash_function_defaults_test
SRCS
"internal/hash_function_defaults_test.cc"
DEPS
absl::hash_function_defaults
absl::hash
absl::strings
gmock_main
)
absl_cc_library(
NAME
hash_generator_testing
HDRS
"internal/hash_generator_testing.h"
SRCS
"internal/hash_generator_testing.cc"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_policy_testing
absl::meta
absl::strings
TESTONLY
)
absl_cc_library(
NAME
hash_policy_testing
HDRS
"internal/hash_policy_testing.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash
absl::strings
TESTONLY
)
absl_cc_test(
NAME
hash_policy_testing_test
SRCS
"internal/hash_policy_testing_test.cc"
DEPS
absl::hash_policy_testing
gmock_main
)
absl_cc_library(
NAME
hash_policy_traits
HDRS
"internal/hash_policy_traits.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::meta
PUBLIC
)
absl_cc_test(
NAME
hash_policy_traits_test
SRCS
"internal/hash_policy_traits_test.cc"
DEPS
absl::hash_policy_traits
gmock_main
)
absl_cc_library(
NAME
hashtable_debug
HDRS
"internal/hashtable_debug.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::hashtable_debug_hooks
)
absl_cc_library(
NAME
hashtable_debug_hooks
HDRS
"internal/hashtable_debug_hooks.h"
COPTS
${ABSL_DEFAULT_COPTS}
PUBLIC
)
absl_cc_library(
NAME
node_hash_policy
HDRS
"internal/node_hash_policy.h"
COPTS
${ABSL_DEFAULT_COPTS}
PUBLIC
)
absl_cc_test(
NAME
node_hash_policy_test
SRCS
"internal/node_hash_policy_test.cc"
DEPS
absl::hash_policy_traits
absl::node_hash_policy
gmock_main
)
absl_cc_library(
NAME
raw_hash_map
HDRS
"internal/raw_hash_map.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::container_memory
absl::raw_hash_set
PUBLIC
)
absl_cc_library(
NAME
raw_hash_set
HDRS
"internal/raw_hash_set.h"
SRCS
"internal/raw_hash_set.cc"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::compressed_tuple
absl::container_memory
absl::hash_policy_traits
absl::hashtable_debug_hooks
absl::layout
absl::bits
absl::config
absl::core_headers
absl::endian
absl::memory
absl::meta
absl::optional
absl::utility
PUBLIC
)
absl_cc_test(
NAME
raw_hash_set_test
SRCS
"internal/raw_hash_set_test.cc"
DEPS
absl::container_memory
absl::hash_function_defaults
absl::hash_policy_testing
absl::hashtable_debug
absl::raw_hash_set
absl::base
absl::core_headers
absl::strings
gmock_main
)
absl_cc_test(
NAME
raw_hash_set_allocator_test
SRCS
"internal/raw_hash_set_allocator_test.cc"
DEPS
absl::raw_hash_set
absl::tracked
absl::core_headers
gmock_main
)
absl_cc_library(
NAME
layout
HDRS
"internal/layout.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::core_headers
absl::meta
absl::strings
absl::span
absl::utility
PUBLIC
)
absl_cc_test(
NAME
layout_test
SRCS
"internal/layout_test.cc"
DEPS
absl::layout
absl::base
absl::core_headers
absl::span
gmock_main
)
absl_cc_library(
NAME
tracked
HDRS
"internal/tracked.h"
COPTS
${ABSL_TEST_COPTS}
TESTONLY
)
absl_cc_library(
NAME
unordered_map_constructor_test
HDRS
"internal/unordered_map_constructor_test.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_generator_testing
absl::hash_policy_testing
gmock
TESTONLY
)
absl_cc_library(
NAME
unordered_map_lookup_test
HDRS
"internal/unordered_map_lookup_test.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_generator_testing
absl::hash_policy_testing
gmock
TESTONLY
)
absl_cc_library(
NAME
unordered_map_modifiers_test
HDRS
"internal/unordered_map_modifiers_test.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_generator_testing
absl::hash_policy_testing
gmock
TESTONLY
)
absl_cc_library(
NAME
unordered_set_constructor_test
HDRS
"internal/unordered_set_constructor_test.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_generator_testing
absl::hash_policy_testing
gmock
TESTONLY
)
absl_cc_library(
NAME
unordered_set_lookup_test
HDRS
"internal/unordered_set_lookup_test.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_generator_testing
absl::hash_policy_testing
gmock
TESTONLY
)
absl_cc_library(
NAME
unordered_set_modifiers_test
HDRS
"internal/unordered_set_modifiers_test.h"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::hash_generator_testing
absl::hash_policy_testing
gmock
TESTONLY
)
absl_cc_test(
NAME
unordered_set_test
SRCS
"internal/unordered_set_test.cc"
DEPS
absl::unordered_set_constructor_test
absl::unordered_set_lookup_test
absl::unordered_set_modifiers_test
gmock_main
)
absl_cc_test(
NAME
unordered_map_test
SRCS
"internal/unordered_map_test.cc"
DEPS
absl::unordered_map_constructor_test
absl::unordered_map_lookup_test
absl::unordered_map_modifiers_test
gmock_main
)

@@ -1,4 +1,4 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -47,10 +47,11 @@
#include "absl/base/macros.h"
#include "absl/base/optimization.h"
#include "absl/base/port.h"
#include "absl/container/internal/compressed_tuple.h"
#include "absl/memory/memory.h"
namespace absl {
inline namespace lts_2018_12_18 {
constexpr static auto kFixedArrayUseDefault = static_cast<size_t>(-1);
@@ -58,13 +59,13 @@ constexpr static auto kFixedArrayUseDefault = static_cast<size_t>(-1);
// FixedArray
// -----------------------------------------------------------------------------
//
// A `FixedArray` provides a run-time fixed-size array, allocating a small array
// inline for efficiency.
//
// Most users should not specify an `inline_elements` argument and let
// `FixedArray` automatically determine the number of elements
// to store inline based on `sizeof(T)`. If `inline_elements` is specified, the
// `FixedArray` implementation will use inline storage for arrays with a
// length <= `inline_elements`.
//
// Note that a `FixedArray` constructed with a `size_type` argument will
@@ -77,65 +78,100 @@ constexpr static auto kFixedArrayUseDefault = static_cast<size_t>(-1);
// heap allocation, it will do so with global `::operator new[]()` and
// `::operator delete[]()`, even if T provides class-scope overrides for these
// operators.
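//
// Example (a minimal usage sketch added for illustration):
//
//   // With the default `inline_elements`, up to 256 bytes of elements
//   // (here, 256 / sizeof(int) ints) are stored inline; a length of 10
//   // therefore needs no heap allocation.
//   absl::FixedArray<int> small(10);
//
//   // Requesting 4 inline elements means a length of 100 is allocated
//   // through the allocator parameter (std::allocator<int> by default).
//   absl::FixedArray<int, 4> large(100);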
template <typename T, size_t N = kFixedArrayUseDefault,
typename A = std::allocator<T>>
class FixedArray {
static_assert(!std::is_array<T>::value || std::extent<T>::value > 0,
"Arrays with unknown bounds cannot be used with FixedArray.");
static constexpr size_t kInlineBytesDefault = 256;
using AllocatorTraits = std::allocator_traits<A>;
// std::iterator_traits isn't guaranteed to be SFINAE-friendly until C++17,
// but this seems to be mostly pedantic.
template <typename Iterator>
using EnableIfForwardIterator = absl::enable_if_t<std::is_convertible<
typename std::iterator_traits<Iterator>::iterator_category,
std::forward_iterator_tag>::value>;
static constexpr bool NoexceptCopyable() {
return std::is_nothrow_copy_constructible<StorageElement>::value &&
absl::allocator_is_nothrow<allocator_type>::value;
}
static constexpr bool NoexceptMovable() {
return std::is_nothrow_move_constructible<StorageElement>::value &&
absl::allocator_is_nothrow<allocator_type>::value;
}
static constexpr bool DefaultConstructorIsNonTrivial() {
return !absl::is_trivially_default_constructible<StorageElement>::value;
}
public:
using allocator_type = typename AllocatorTraits::allocator_type;
using value_type = typename allocator_type::value_type;
using pointer = typename allocator_type::pointer;
using const_pointer = typename allocator_type::const_pointer;
using reference = typename allocator_type::reference;
using const_reference = typename allocator_type::const_reference;
using size_type = typename allocator_type::size_type;
using difference_type = typename allocator_type::difference_type;
using iterator = pointer;
using const_iterator = const_pointer;
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
static constexpr size_type inline_elements =
(N == kFixedArrayUseDefault ? kInlineBytesDefault / sizeof(value_type)
: static_cast<size_type>(N));
FixedArray(
const FixedArray& other,
const allocator_type& a = allocator_type()) noexcept(NoexceptCopyable())
: FixedArray(other.begin(), other.end(), a) {}
FixedArray(
FixedArray&& other,
const allocator_type& a = allocator_type()) noexcept(NoexceptMovable())
: FixedArray(std::make_move_iterator(other.begin()),
std::make_move_iterator(other.end()), a) {}
// Creates an array object that can store `n` elements.
// Note that trivially constructible elements will be uninitialized.
explicit FixedArray(size_type n, const allocator_type& a = allocator_type())
: storage_(n, a) {
if (DefaultConstructorIsNonTrivial()) {
memory_internal::ConstructRange(storage_.alloc(), storage_.begin(),
storage_.end());
}
}
// Creates an array initialized with `n` copies of `val`.
FixedArray(size_type n, const value_type& val,
const allocator_type& a = allocator_type())
: storage_(n, a) {
memory_internal::ConstructRange(storage_.alloc(), storage_.begin(),
storage_.end(), val);
}
// Creates an array initialized with the size and contents of `init_list`.
FixedArray(std::initializer_list<value_type> init_list,
const allocator_type& a = allocator_type())
: FixedArray(init_list.begin(), init_list.end(), a) {}
// Creates an array initialized with the elements from the input
// range. The array's size will always be `std::distance(first, last)`.
// REQUIRES: Iterator must be a forward_iterator or better.
template <typename Iterator, EnableIfForwardIterator<Iterator>* = nullptr>
FixedArray(Iterator first, Iterator last,
const allocator_type& a = allocator_type())
: storage_(std::distance(first, last), a) {
memory_internal::CopyRange(storage_.alloc(), storage_.begin(), first, last);
}
~FixedArray() noexcept {
for (auto* cur = storage_.begin(); cur != storage_.end(); ++cur) {
AllocatorTraits::destroy(storage_.alloc(), cur);
}
}
// Assignments are deleted because they break the invariant that the size of a
// `FixedArray` never changes.
@@ -145,7 +181,7 @@ class FixedArray {
// FixedArray::size()
//
// Returns the length of the fixed array.
size_type size() const { return storage_.size(); }
// FixedArray::max_size()
//
@@ -153,7 +189,7 @@ class FixedArray {
// `FixedArray<T>`. This is equivalent to the most possible addressable bytes
// over the number of bytes taken by T.
constexpr size_type max_size() const {
return (std::numeric_limits<difference_type>::max)() / sizeof(value_type);
}
// FixedArray::empty()
@@ -170,12 +206,12 @@ class FixedArray {
//
// Returns a const T* pointer to elements of the `FixedArray`. This pointer
// can be used to access (but not modify) the contained elements.
const_pointer data() const { return AsValueType(storage_.begin()); }
// Overload of FixedArray::data() to return a T* pointer to elements of the
// fixed array. This pointer can be used to access and modify the contained
// elements.
pointer data() { return AsValueType(storage_.begin()); }
// FixedArray::operator[]
//
@@ -295,7 +331,7 @@ class FixedArray {
// FixedArray::fill()
//
// Assigns the given `val` to all elements in the fixed array.
void fill(const value_type& val) { std::fill(begin(), end(), val); }
// Relational operators. Equality operators are elementwise using
// `operator==`, while order operators order FixedArrays lexicographically.
@@ -324,18 +360,25 @@ class FixedArray {
return !(lhs < rhs);
}
template <typename H>
friend H AbslHashValue(H h, const FixedArray& v) {
return H::combine(H::combine_contiguous(std::move(h), v.data(), v.size()),
v.size());
}
private:
// StorageElement
//
// For FixedArrays with a C-style-array value_type, StorageElement is a POD
// wrapper struct called StorageElementWrapper that holds the value_type
// instance inside. This is needed for construction and destruction of the
// entire array regardless of how many dimensions it has. For all other cases,
// StorageElement is just an alias of value_type.
//
// Maintainer's Note: The simpler solution would be to simply wrap value_type
// in a struct whether it's an array or not. That causes some paranoid
// diagnostics to misfire, believing that 'data()' returns a pointer to a
// single element, rather than the packed array that it really is.
// e.g.:
//
// FixedArray<char> buf(1);
@@ -344,157 +387,134 @@ class FixedArray {
// error: call to int __builtin___sprintf_chk(etc...)
// will always overflow destination buffer [-Werror]
//
template <typename OuterT = value_type,
typename InnerT = absl::remove_extent_t<OuterT>,
size_t InnerN = std::extent<OuterT>::value>
struct StorageElementWrapper {
InnerT array[InnerN];
};
using StorageElement =
absl::conditional_t<std::is_array<value_type>::value,
StorageElementWrapper<value_type>, value_type>;
using StorageElementBuffer =
absl::aligned_storage_t<sizeof(StorageElement), alignof(StorageElement)>;
static pointer AsValueType(pointer ptr) { return ptr; }
static pointer AsValueType(StorageElementWrapper<value_type>* ptr) {
return std::addressof(ptr->array);
}
static_assert(sizeof(StorageElement) == sizeof(value_type), "");
static_assert(alignof(StorageElement) == alignof(value_type), "");
struct NonEmptyInlinedStorage {
StorageElement* data() {
return reinterpret_cast<StorageElement*>(inlined_storage_.data());
}
#ifdef ADDRESS_SANITIZER
void* RedzoneBegin() { return &redzone_begin_; }
void* RedzoneEnd() { return &redzone_end_ + 1; }
#endif // ADDRESS_SANITIZER
void AnnotateConstruct(size_type);
void AnnotateDestruct(size_type);
ADDRESS_SANITIZER_REDZONE(redzone_begin_);
std::array<StorageElementBuffer, inline_elements> inlined_storage_;
ADDRESS_SANITIZER_REDZONE(redzone_end_);
};
struct EmptyInlinedStorage {
StorageElement* data() { return nullptr; }
void AnnotateConstruct(size_type) {}
void AnnotateDestruct(size_type) {}
};
using InlinedStorage =
absl::conditional_t<inline_elements == 0, EmptyInlinedStorage,
NonEmptyInlinedStorage>;
// Storage
//
// An instance of Storage manages the inline and out-of-line memory for
// instances of FixedArray. This guarantees that even when construction of
// individual elements fails in the FixedArray constructor body, the
// destructor for Storage will still be called and out-of-line memory will be
// properly deallocated.
//
class Storage : public InlinedStorage {
public:
Storage(size_type n, const allocator_type& a)
: size_alloc_(n, a), data_(InitializeData()) {}
~Storage() noexcept {
if (UsingInlinedStorage(size())) {
InlinedStorage::AnnotateDestruct(size());
} else {
AllocatorTraits::deallocate(alloc(), AsValueType(begin()), size());
}
}
size_type size() const { return size_alloc_.template get<0>(); }
StorageElement* begin() const { return data_; }
StorageElement* end() const { return begin() + size(); }
allocator_type& alloc() {
return size_alloc_.template get<1>();
}
private:
static bool UsingInlinedStorage(size_type n) {
return n <= inline_elements;
}
StorageElement* InitializeData() {
if (UsingInlinedStorage(size())) {
InlinedStorage::AnnotateConstruct(size());
return InlinedStorage::data();
} else {
return reinterpret_cast<StorageElement*>(
AllocatorTraits::allocate(alloc(), size()));
}
}
// `CompressedTuple` takes advantage of EBCO for stateless `allocator_type`s
container_internal::CompressedTuple<size_type, allocator_type> size_alloc_;
StorageElement* data_;
};
// Data members
Storage storage_;
};
template <typename T, size_t N, typename A>
constexpr size_t FixedArray<T, N, A>::kInlineBytesDefault;
template <typename T, size_t N, typename A>
constexpr typename FixedArray<T, N, A>::size_type
FixedArray<T, N, A>::inline_elements;
template <typename T, size_t N, typename A>
void FixedArray<T, N, A>::NonEmptyInlinedStorage::AnnotateConstruct(
typename FixedArray<T, N, A>::size_type n) {
#ifdef ADDRESS_SANITIZER
if (!n) return;
ANNOTATE_CONTIGUOUS_CONTAINER(data(), RedzoneEnd(), RedzoneEnd(), data() + n);
ANNOTATE_CONTIGUOUS_CONTAINER(RedzoneBegin(), data(), data(), RedzoneBegin());
#endif // ADDRESS_SANITIZER
static_cast<void>(n); // Mark used when not in asan mode
}
template <typename T, size_t N, typename A>
void FixedArray<T, N, A>::NonEmptyInlinedStorage::AnnotateDestruct(
typename FixedArray<T, N, A>::size_type n) {
#ifdef ADDRESS_SANITIZER
if (!n) return;
ANNOTATE_CONTIGUOUS_CONTAINER(data(), RedzoneEnd(), data() + n, RedzoneEnd());
ANNOTATE_CONTIGUOUS_CONTAINER(RedzoneBegin(), data(), RedzoneBegin(), data());
#endif // ADDRESS_SANITIZER
static_cast<void>(n); // Mark used when not in asan mode
}
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_FIXED_ARRAY_H_

@@ -0,0 +1,119 @@
// Copyright 2017 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <initializer_list>
#include "absl/container/fixed_array.h"
#include "gtest/gtest.h"
#include "absl/base/internal/exception_safety_testing.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace {
constexpr size_t kInlined = 25;
constexpr size_t kSmallSize = kInlined / 2;
constexpr size_t kLargeSize = kInlined * 2;
constexpr int kInitialValue = 5;
constexpr int kUpdatedValue = 10;
using ::testing::TestThrowingCtor;
using Thrower = testing::ThrowingValue<testing::TypeSpec::kEverythingThrows>;
using FixedArr = absl::FixedArray<Thrower, kInlined>;
using MoveThrower = testing::ThrowingValue<testing::TypeSpec::kNoThrowMove>;
using MoveFixedArr = absl::FixedArray<MoveThrower, kInlined>;
TEST(FixedArrayExceptionSafety, CopyConstructor) {
auto small = FixedArr(kSmallSize);
TestThrowingCtor<FixedArr>(small);
auto large = FixedArr(kLargeSize);
TestThrowingCtor<FixedArr>(large);
}
TEST(FixedArrayExceptionSafety, MoveConstructor) {
TestThrowingCtor<FixedArr>(FixedArr(kSmallSize));
TestThrowingCtor<FixedArr>(FixedArr(kLargeSize));
// TypeSpec::kNoThrowMove
TestThrowingCtor<MoveFixedArr>(MoveFixedArr(kSmallSize));
TestThrowingCtor<MoveFixedArr>(MoveFixedArr(kLargeSize));
}
TEST(FixedArrayExceptionSafety, SizeConstructor) {
TestThrowingCtor<FixedArr>(kSmallSize);
TestThrowingCtor<FixedArr>(kLargeSize);
}
TEST(FixedArrayExceptionSafety, SizeValueConstructor) {
TestThrowingCtor<FixedArr>(kSmallSize, Thrower());
TestThrowingCtor<FixedArr>(kLargeSize, Thrower());
}
TEST(FixedArrayExceptionSafety, IteratorConstructor) {
auto small = FixedArr(kSmallSize);
TestThrowingCtor<FixedArr>(small.begin(), small.end());
auto large = FixedArr(kLargeSize);
TestThrowingCtor<FixedArr>(large.begin(), large.end());
}
TEST(FixedArrayExceptionSafety, InitListConstructor) {
constexpr int small_inlined = 3;
using SmallFixedArr = absl::FixedArray<Thrower, small_inlined>;
TestThrowingCtor<SmallFixedArr>(std::initializer_list<Thrower>{});
// Test inlined allocation
TestThrowingCtor<SmallFixedArr>(
std::initializer_list<Thrower>{Thrower{}, Thrower{}});
// Test out of line allocation
TestThrowingCtor<SmallFixedArr>(std::initializer_list<Thrower>{
Thrower{}, Thrower{}, Thrower{}, Thrower{}, Thrower{}});
}
testing::AssertionResult ReadMemory(FixedArr* fixed_arr) {
// Marked volatile to prevent optimization. Used for running asan tests.
volatile int sum = 0;
for (const auto& thrower : *fixed_arr) {
sum += thrower.Get();
}
return testing::AssertionSuccess() << "Values sum to [" << sum << "]";
}
TEST(FixedArrayExceptionSafety, Fill) {
auto test_fill = testing::MakeExceptionSafetyTester()
.WithContracts(ReadMemory)
.WithOperation([&](FixedArr* fixed_arr_ptr) {
auto thrower =
Thrower(kUpdatedValue, testing::nothrow_ctor);
fixed_arr_ptr->fill(thrower);
});
EXPECT_TRUE(
test_fill.WithInitialValue(FixedArr(kSmallSize, Thrower(kInitialValue)))
.Test());
EXPECT_TRUE(
test_fill.WithInitialValue(FixedArr(kLargeSize, Thrower(kInitialValue)))
.Test());
}
} // namespace
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -15,9 +15,11 @@
#include "absl/container/fixed_array.h"
#include <stdio.h>
#include <cstring>
#include <list>
#include <memory>
#include <numeric>
#include <scoped_allocator>
#include <stdexcept>
#include <string>
#include <vector>
@@ -25,6 +27,7 @@
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include "absl/base/internal/exception_testing.h"
#include "absl/hash/hash_testing.h"
#include "absl/memory/memory.h"
using ::testing::ElementsAreArray;
@@ -607,6 +610,216 @@ TEST(FixedArrayTest, Fill) {
empty.fill(fill_val);
}
// TODO(johnsoncj): Investigate InlinedStorage default initialization in GCC 4.x
#ifndef __GNUC__
TEST(FixedArrayTest, DefaultCtorDoesNotValueInit) {
using T = char;
constexpr auto capacity = 10;
using FixedArrType = absl::FixedArray<T, capacity>;
using FixedArrBuffType =
absl::aligned_storage_t<sizeof(FixedArrType), alignof(FixedArrType)>;
constexpr auto scrubbed_bits = 0x95;
constexpr auto length = capacity / 2;
FixedArrBuffType buff;
std::memset(std::addressof(buff), scrubbed_bits, sizeof(FixedArrBuffType));
FixedArrType* arr =
::new (static_cast<void*>(std::addressof(buff))) FixedArrType(length);
EXPECT_THAT(*arr, testing::Each(scrubbed_bits));
arr->~FixedArrType();
}
#endif // __GNUC__
// This is a stateful allocator, but the state lives outside of the
// allocator (in whatever test is using the allocator). This is odd
// but helps in tests where the allocator is propagated into nested
// containers - that chain of allocators uses the same state and is
// thus easier to query for aggregate allocation information.
template <typename T>
class CountingAllocator : public std::allocator<T> {
public:
using Alloc = std::allocator<T>;
using pointer = typename Alloc::pointer;
using size_type = typename Alloc::size_type;
CountingAllocator() : bytes_used_(nullptr), instance_count_(nullptr) {}
explicit CountingAllocator(int64_t* b)
: bytes_used_(b), instance_count_(nullptr) {}
CountingAllocator(int64_t* b, int64_t* a)
: bytes_used_(b), instance_count_(a) {}
template <typename U>
explicit CountingAllocator(const CountingAllocator<U>& x)
: Alloc(x),
bytes_used_(x.bytes_used_),
instance_count_(x.instance_count_) {}
pointer allocate(size_type n, const void* const hint = nullptr) {
assert(bytes_used_ != nullptr);
*bytes_used_ += n * sizeof(T);
return Alloc::allocate(n, hint);
}
void deallocate(pointer p, size_type n) {
Alloc::deallocate(p, n);
assert(bytes_used_ != nullptr);
*bytes_used_ -= n * sizeof(T);
}
template <typename... Args>
void construct(pointer p, Args&&... args) {
Alloc::construct(p, absl::forward<Args>(args)...);
if (instance_count_) {
*instance_count_ += 1;
}
}
void destroy(pointer p) {
Alloc::destroy(p);
if (instance_count_) {
*instance_count_ -= 1;
}
}
template <typename U>
class rebind {
public:
using other = CountingAllocator<U>;
};
int64_t* bytes_used_;
int64_t* instance_count_;
};
TEST(AllocatorSupportTest, CountInlineAllocations) {
constexpr size_t inlined_size = 4;
using Alloc = CountingAllocator<int>;
using AllocFxdArr = absl::FixedArray<int, inlined_size, Alloc>;
int64_t allocated = 0;
int64_t active_instances = 0;
{
const int ia[] = {0, 1, 2, 3, 4, 5, 6, 7};
Alloc alloc(&allocated, &active_instances);
AllocFxdArr arr(ia, ia + inlined_size, alloc);
static_cast<void>(arr);
}
EXPECT_EQ(allocated, 0);
EXPECT_EQ(active_instances, 0);
}
TEST(AllocatorSupportTest, CountOutoflineAllocations) {
constexpr size_t inlined_size = 4;
using Alloc = CountingAllocator<int>;
using AllocFxdArr = absl::FixedArray<int, inlined_size, Alloc>;
int64_t allocated = 0;
int64_t active_instances = 0;
{
const int ia[] = {0, 1, 2, 3, 4, 5, 6, 7};
Alloc alloc(&allocated, &active_instances);
AllocFxdArr arr(ia, ia + ABSL_ARRAYSIZE(ia), alloc);
EXPECT_EQ(allocated, arr.size() * sizeof(int));
static_cast<void>(arr);
}
EXPECT_EQ(active_instances, 0);
}
TEST(AllocatorSupportTest, CountCopyInlineAllocations) {
constexpr size_t inlined_size = 4;
using Alloc = CountingAllocator<int>;
using AllocFxdArr = absl::FixedArray<int, inlined_size, Alloc>;
int64_t allocated1 = 0;
int64_t allocated2 = 0;
int64_t active_instances = 0;
Alloc alloc(&allocated1, &active_instances);
Alloc alloc2(&allocated2, &active_instances);
{
int initial_value = 1;
AllocFxdArr arr1(inlined_size / 2, initial_value, alloc);
EXPECT_EQ(allocated1, 0);
AllocFxdArr arr2(arr1, alloc2);
EXPECT_EQ(allocated2, 0);
static_cast<void>(arr1);
static_cast<void>(arr2);
}
EXPECT_EQ(active_instances, 0);
}
TEST(AllocatorSupportTest, CountCopyOutoflineAllocations) {
constexpr size_t inlined_size = 4;
using Alloc = CountingAllocator<int>;
using AllocFxdArr = absl::FixedArray<int, inlined_size, Alloc>;
int64_t allocated1 = 0;
int64_t allocated2 = 0;
int64_t active_instances = 0;
Alloc alloc(&allocated1, &active_instances);
Alloc alloc2(&allocated2, &active_instances);
{
int initial_value = 1;
AllocFxdArr arr1(inlined_size * 2, initial_value, alloc);
EXPECT_EQ(allocated1, arr1.size() * sizeof(int));
AllocFxdArr arr2(arr1, alloc2);
EXPECT_EQ(allocated2, inlined_size * 2 * sizeof(int));
static_cast<void>(arr1);
static_cast<void>(arr2);
}
EXPECT_EQ(active_instances, 0);
}
TEST(AllocatorSupportTest, SizeValAllocConstructor) {
using testing::AllOf;
using testing::Each;
using testing::SizeIs;
constexpr size_t inlined_size = 4;
using Alloc = CountingAllocator<int>;
using AllocFxdArr = absl::FixedArray<int, inlined_size, Alloc>;
{
auto len = inlined_size / 2;
auto val = 0;
int64_t allocated = 0;
AllocFxdArr arr(len, val, Alloc(&allocated));
EXPECT_EQ(allocated, 0);
EXPECT_THAT(arr, AllOf(SizeIs(len), Each(0)));
}
{
auto len = inlined_size * 2;
auto val = 0;
int64_t allocated = 0;
AllocFxdArr arr(len, val, Alloc(&allocated));
EXPECT_EQ(allocated, len * sizeof(int));
EXPECT_THAT(arr, AllOf(SizeIs(len), Each(0)));
}
}
#ifdef ADDRESS_SANITIZER
TEST(FixedArrayTest, AddressSanitizerAnnotations1) {
absl::FixedArray<int, 32> a(10);

@@ -0,0 +1,582 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// -----------------------------------------------------------------------------
// File: flat_hash_map.h
// -----------------------------------------------------------------------------
//
// An `absl::flat_hash_map<K, V>` is an unordered associative container of
// unique keys and associated values designed to be a more efficient replacement
// for `std::unordered_map`. Like `unordered_map`, search, insertion, and
// deletion of map elements can be done as an `O(1)` operation. However,
// `flat_hash_map` (and the other unordered associative containers known
// collectively as the Abseil "Swiss tables") contains additional
// optimizations that result in both memory and computation advantages.
//
// In most cases, your default choice for a hash map should be a map of type
// `flat_hash_map`.
#ifndef ABSL_CONTAINER_FLAT_HASH_MAP_H_
#define ABSL_CONTAINER_FLAT_HASH_MAP_H_
#include <cstddef>
#include <new>
#include <type_traits>
#include <utility>
#include "absl/algorithm/container.h"
#include "absl/container/internal/container_memory.h"
#include "absl/container/internal/hash_function_defaults.h" // IWYU pragma: export
#include "absl/container/internal/raw_hash_map.h" // IWYU pragma: export
#include "absl/memory/memory.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
template <class K, class V>
struct FlatHashMapPolicy;
} // namespace container_internal
// -----------------------------------------------------------------------------
// absl::flat_hash_map
// -----------------------------------------------------------------------------
//
// An `absl::flat_hash_map<K, V>` is an unordered associative container which
// has been optimized for both speed and memory footprint in most common use
// cases. Its interface is similar to that of `std::unordered_map<K, V>` with
// the following notable differences:
//
// * Requires keys that are CopyConstructible
// * Requires values that are MoveConstructible
// * Supports heterogeneous lookup, through `find()`, `operator[]()` and
//   `insert()`, provided that the map is supplied with a compatible
//   heterogeneous hashing function and equality operator (see the sketch
//   following this list).
// * Invalidates any references and pointers to elements within the table after
// `rehash()`.
// * Contains a `capacity()` member function indicating the number of element
// slots (open, deleted, and empty) within the hash map.
// * Returns `void` from the `erase(iterator)` overload.
//
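// A minimal sketch of heterogeneous lookup (assuming the default hashing
// and equality functors, which accept string-like types):
//
//   absl::flat_hash_map<std::string, int> m = {{"key", 1}};
//   // Looks up by absl::string_view; no temporary std::string is built.
//   auto it = m.find(absl::string_view("key"));
//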
// By default, `flat_hash_map` uses the `absl::Hash` hashing framework.
// All fundamental and Abseil types that support the `absl::Hash` framework have
// a compatible equality operator for comparing insertions into `flat_hash_map`.
// If your type is not yet supported by the `absl::Hash` framework, see
// absl/hash/hash.h for information on extending Abseil hashing to user-defined
// types.
//
// NOTE: A `flat_hash_map` stores its value types directly inside its
// implementation array to avoid memory indirection. Because a `flat_hash_map`
// is designed to move data when rehashed, map values will not retain pointer
// stability. If you require pointer stability, or your values are large,
// consider using `absl::flat_hash_map<Key, std::unique_ptr<Value>>` instead.
// If your types are not moveable or you require pointer stability for keys,
// consider `absl::node_hash_map`.
//
// Example:
//
// // Create a flat hash map of three strings (that map to strings)
// absl::flat_hash_map<std::string, std::string> ducks =
// {{"a", "huey"}, {"b", "dewey"}, {"c", "louie"}};
//
// // Insert a new element into the flat hash map
// ducks.insert({"d", "donald"});
//
// // Force a rehash of the flat hash map
// ducks.rehash(0);
//
// // Find the element with the key "b"
// std::string search_key = "b";
// auto result = ducks.find(search_key);
// if (result != ducks.end()) {
// std::cout << "Result: " << result->second << std::endl;
// }
template <class K, class V,
class Hash = absl::container_internal::hash_default_hash<K>,
class Eq = absl::container_internal::hash_default_eq<K>,
class Allocator = std::allocator<std::pair<const K, V>>>
class flat_hash_map : public absl::container_internal::raw_hash_map<
absl::container_internal::FlatHashMapPolicy<K, V>,
Hash, Eq, Allocator> {
using Base = typename flat_hash_map::raw_hash_map;
public:
// Constructors and Assignment Operators
//
// A flat_hash_map supports the same overload set as `std::unordered_map`
// for construction and assignment:
//
// * Default constructor
//
// // No allocation for the table's elements is made.
// absl::flat_hash_map<int, std::string> map1;
//
// * Initializer List constructor
//
// absl::flat_hash_map<int, std::string> map2 =
// {{1, "huey"}, {2, "dewey"}, {3, "louie"},};
//
// * Copy constructor
//
// absl::flat_hash_map<int, std::string> map3(map2);
//
// * Copy assignment operator
//
// // Hash functor and Comparator are copied as well
// absl::flat_hash_map<int, std::string> map4;
// map4 = map3;
//
// * Move constructor
//
// // Move is guaranteed efficient
// absl::flat_hash_map<int, std::string> map5(std::move(map4));
//
// * Move assignment operator
//
// // May be efficient if allocators are compatible
// absl::flat_hash_map<int, std::string> map6;
// map6 = std::move(map5);
//
// * Range constructor
//
// std::vector<std::pair<int, std::string>> v = {{1, "a"}, {2, "b"}};
// absl::flat_hash_map<int, std::string> map7(v.begin(), v.end());
flat_hash_map() {}
using Base::Base;
// flat_hash_map::begin()
//
// Returns an iterator to the beginning of the `flat_hash_map`.
using Base::begin;
// flat_hash_map::cbegin()
//
// Returns a const iterator to the beginning of the `flat_hash_map`.
using Base::cbegin;
// flat_hash_map::cend()
//
// Returns a const iterator to the end of the `flat_hash_map`.
using Base::cend;
// flat_hash_map::end()
//
// Returns an iterator to the end of the `flat_hash_map`.
using Base::end;
// flat_hash_map::capacity()
//
// Returns the number of element slots (assigned, deleted, and empty)
// available within the `flat_hash_map`.
//
// NOTE: this member function is particular to `absl::flat_hash_map` and is
// not provided in the `std::unordered_map` API.
using Base::capacity;
// flat_hash_map::empty()
//
// Returns whether or not the `flat_hash_map` is empty.
using Base::empty;
// flat_hash_map::max_size()
//
// Returns the largest theoretical possible number of elements within a
// `flat_hash_map` under current memory constraints. This value can be thought
// of as the largest value of `std::distance(begin(), end())` for a
// `flat_hash_map<K, V>`.
using Base::max_size;
// flat_hash_map::size()
//
// Returns the number of elements currently within the `flat_hash_map`.
using Base::size;
// flat_hash_map::clear()
//
// Removes all elements from the `flat_hash_map`. Invalidates any references,
// pointers, or iterators referring to contained elements.
//
// NOTE: this operation may shrink the underlying buffer. To avoid shrinking
// the underlying buffer, call `erase(begin(), end())`.
using Base::clear;
// flat_hash_map::erase()
//
// Erases elements within the `flat_hash_map`. Erasing does not trigger a
// rehash. Overloads are listed below.
//
// void erase(const_iterator pos):
//
// Erases the element at `pos` of the `flat_hash_map`, returning
// `void`.
//
// NOTE: this return behavior is different than that of STL containers in
// general and `std::unordered_map` in particular.
//
// iterator erase(const_iterator first, const_iterator last):
//
// Erases the elements in the open interval [`first`, `last`), returning an
// iterator pointing to `last`.
//
// size_type erase(const key_type& key):
//
// Erases the element with the matching key, if it exists.
using Base::erase;
// flat_hash_map::insert()
//
// Inserts an element of the specified value into the `flat_hash_map`,
// returning an iterator pointing to the newly inserted element, provided that
// an element with the given key does not already exist. If rehashing occurs
// due to the insertion, all iterators are invalidated. Overloads are listed
// below.
//
// std::pair<iterator,bool> insert(const init_type& value):
//
// Inserts a value into the `flat_hash_map`. Returns a pair consisting of an
// iterator to the inserted element (or to the element that prevented the
// insertion) and a bool denoting whether the insertion took place.
//
// std::pair<iterator,bool> insert(T&& value):
// std::pair<iterator,bool> insert(init_type&& value):
//
// Inserts a moveable value into the `flat_hash_map`. Returns a pair
// consisting of an iterator to the inserted element (or to the element that
// prevented the insertion) and a bool denoting whether the insertion took
// place.
//
// iterator insert(const_iterator hint, const init_type& value):
// iterator insert(const_iterator hint, T&& value):
// iterator insert(const_iterator hint, init_type&& value);
//
// Inserts a value, using the position of `hint` as a non-binding suggestion
// for where to begin the insertion search. Returns an iterator to the
// inserted element, or to the existing element that prevented the
// insertion.
//
// void insert(InputIterator first, InputIterator last):
//
// Inserts a range of values [`first`, `last`).
//
// NOTE: Although the STL does not specify which element may be inserted if
// multiple keys compare equivalently, for `flat_hash_map` we guarantee the
// first match is inserted.
//
// void insert(std::initializer_list<init_type> ilist):
//
// Inserts the elements within the initializer list `ilist`.
//
// NOTE: Although the STL does not specify which element may be inserted if
// multiple keys compare equivalently within the initializer list, for
// `flat_hash_map` we guarantee the first match is inserted.
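//
// A minimal sketch of the first overload:
//
//   absl::flat_hash_map<int, std::string> m;
//   auto p = m.insert({1, "one"});  // p.second == true: insertion happened
//   p = m.insert({1, "uno"});       // p.second == false: key 1 already set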
using Base::insert;
// flat_hash_map::insert_or_assign()
//
// Inserts an element of the specified value into the `flat_hash_map` provided
// that a value with the given key does not already exist, or replaces it with
// the element value if a key for that value already exists, returning an
// iterator pointing to the newly inserted element. If rehashing occurs due
// to the insertion, all existing iterators are invalidated. Overloads are
// listed below.
//
// pair<iterator, bool> insert_or_assign(const init_type& k, T&& obj):
// pair<iterator, bool> insert_or_assign(init_type&& k, T&& obj):
//
// Inserts/Assigns (or moves) the element of the specified key into the
// `flat_hash_map`.
//
// iterator insert_or_assign(const_iterator hint,
// const init_type& k, T&& obj):
// iterator insert_or_assign(const_iterator hint, init_type&& k, T&& obj):
//
// Inserts/Assigns (or moves) the element of the specified key into the
// `flat_hash_map` using the position of `hint` as a non-binding suggestion
// for where to begin the insertion search.
using Base::insert_or_assign;
// flat_hash_map::emplace()
//
// Inserts an element of the specified value by constructing it in-place
// within the `flat_hash_map`, provided that no element with the given key
// already exists.
//
// The element may be constructed even if there already is an element with the
// key in the container, in which case the newly constructed element will be
// destroyed immediately. Prefer `try_emplace()` unless your key is not
// copyable or moveable.
//
// If rehashing occurs due to the insertion, all iterators are invalidated.
using Base::emplace;
// flat_hash_map::emplace_hint()
//
// Inserts an element of the specified value by constructing it in-place
// within the `flat_hash_map`, using the position of `hint` as a non-binding
// suggestion for where to begin the insertion search, and only inserts
// provided that no element with the given key already exists.
//
// The element may be constructed even if there already is an element with the
// key in the container, in which case the newly constructed element will be
// destroyed immediately. Prefer `try_emplace()` unless your key is not
// copyable or moveable.
//
// If rehashing occurs due to the insertion, all iterators are invalidated.
using Base::emplace_hint;
// flat_hash_map::try_emplace()
//
// Inserts an element of the specified value by constructing it in-place
// within the `flat_hash_map`, provided that no element with the given key
// already exists. Unlike `emplace()`, if an element with the given key
// already exists, we guarantee that no element is constructed.
//
// If rehashing occurs due to the insertion, all iterators are invalidated.
// Overloads are listed below.
//
// pair<iterator, bool> try_emplace(const key_type& k, Args&&... args):
// pair<iterator, bool> try_emplace(key_type&& k, Args&&... args):
//
// Inserts (via copy or move) the element of the specified key into the
// `flat_hash_map`.
//
// iterator try_emplace(const_iterator hint,
// const init_type& k, Args&&... args):
// iterator try_emplace(const_iterator hint, init_type&& k, Args&&... args):
//
// Inserts (via copy or move) the element of the specified key into the
// `flat_hash_map` using the position of `hint` as a non-binding suggestion
// for where to begin the insertion search.
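//
// A minimal sketch; if the key already exists, the passed arguments are
// guaranteed to be left intact (unlike `emplace()`):
//
//   absl::flat_hash_map<std::string, std::unique_ptr<int>> m;
//   auto two = absl::make_unique<int>(2);
//   m.try_emplace("a", absl::make_unique<int>(1));
//   m.try_emplace("a", std::move(two));  // no effect; `two` still owns 2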
using Base::try_emplace;
// flat_hash_map::extract()
//
// Extracts the indicated element, erasing it in the process, and returns it
// as a C++17-compatible node handle. Overloads are listed below.
//
// node_type extract(const_iterator position):
//
// Extracts the key,value pair of the element at the indicated position and
// returns a node handle owning that extracted data.
//
// node_type extract(const key_type& x):
//
// Extracts the key,value pair of the element with a key matching the passed
// key value and returns a node handle owning that extracted data. If the
// `flat_hash_map` does not contain an element with a matching key, this
// function returns an empty node handle.
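//
// A minimal sketch of moving a value out of the map without copying it:
//
//   absl::flat_hash_map<int, std::string> m = {{1, "one"}};
//   auto node = m.extract(1);  // `m` is now empty
//   if (!node.empty()) {
//     std::string v = std::move(node.mapped());
//   }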
using Base::extract;
// flat_hash_map::merge()
//
// Extracts elements from a given `source` flat hash map into this
// `flat_hash_map`. If the destination `flat_hash_map` already contains an
// element with an equivalent key, that element is not extracted.
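//
// A minimal sketch (elements whose keys collide stay in the source):
//
//   absl::flat_hash_map<int, std::string> src = {{1, "one"}, {2, "two"}};
//   absl::flat_hash_map<int, std::string> dst = {{2, "TWO"}};
//   dst.merge(src);  // dst: {1, "one"} and {2, "TWO"}; src: {2, "two"}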
using Base::merge;
// flat_hash_map::swap(flat_hash_map& other)
//
// Exchanges the contents of this `flat_hash_map` with those of the `other`
// flat hash map, avoiding invocation of any move, copy, or swap operations on
// individual elements.
//
// All iterators and references on the `flat_hash_map` remain valid, excepting
// for the past-the-end iterator, which is invalidated.
//
// `swap()` requires that the flat hash map's hashing and key equivalence
// functions be Swappable; they are exchanged using unqualified calls to
// non-member `swap()`. If the map's allocator has
// `std::allocator_traits<allocator_type>::propagate_on_container_swap::value`
// set to `true`, the allocators are also exchanged using an unqualified call
// to non-member `swap()`; otherwise, the allocators are not swapped.
using Base::swap;
// flat_hash_map::rehash(count)
//
// Rehashes the `flat_hash_map`, setting the number of slots to be at least
// the passed value. If the new number of slots increases the load factor more
// than the current maximum load factor
// (`count` < `size()` / `max_load_factor()`), then the new number of slots
// will be at least `size()` / `max_load_factor()`.
//
// To force a rehash, pass rehash(0).
//
// NOTE: unlike behavior in `std::unordered_map`, references are also
// invalidated upon a `rehash()`.
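//
// Example (a minimal illustrative sketch of the invalidation noted above):
//
// absl::flat_hash_map<int, int> m = {{1, 2}};
// int* v = &m[1];
// m.rehash(0); // forces a rehash; *v is now a dangling reference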
using Base::rehash;
// flat_hash_map::reserve(count)
//
// Sets the number of slots in the `flat_hash_map` to the number needed to
// accommodate at least `count` total elements without exceeding the current
// maximum load factor, and may rehash the container if needed.
using Base::reserve;
// flat_hash_map::at()
//
// Returns a reference to the mapped value of the element with key equivalent
// to the passed key.
using Base::at;
// flat_hash_map::contains()
//
// Determines whether an element with a key comparing equal to the given `key`
// exists within the `flat_hash_map`, returning `true` if so or `false`
// otherwise.
using Base::contains;
// flat_hash_map::count(const Key& key) const
//
// Returns the number of elements with a key comparing equal to the given
// `key` within the `flat_hash_map`. Note that this function will return
// either `1` or `0` since duplicate keys are not allowed within a
// `flat_hash_map`.
using Base::count;
// flat_hash_map::equal_range()
//
// Returns a closed range [first, last], defined by a `std::pair` of two
// iterators, containing all elements with the passed key in the
// `flat_hash_map`.
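//
// Example (a minimal illustrative sketch; the range holds at most one
// element because keys are unique):
//
// absl::flat_hash_map<int, int> m = {{1, 2}};
// auto range = m.equal_range(1);
// for (auto it = range.first; it != range.second; ++it) {
// // visits the single element {1, 2}
// }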
using Base::equal_range;
// flat_hash_map::find()
//
// Finds an element with the passed `key` within the `flat_hash_map`.
using Base::find;
// flat_hash_map::operator[]()
//
// Returns a reference to the value mapped to the passed key within the
// `flat_hash_map`, performing an `insert()` if the key does not already
// exist.
//
// If an insertion occurs and results in a rehashing of the container, all
// iterators are invalidated. Otherwise iterators are not affected and
// references are not invalidated. Overloads are listed below.
//
// T& operator[](const Key& key):
//
// Inserts an init_type object constructed in-place if the element with the
// given key does not exist.
//
// T& operator[](Key&& key):
//
// Inserts an init_type object constructed in-place provided that an element
// with the given key does not exist.
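//
// Example (a minimal illustrative sketch):
//
// absl::flat_hash_map<std::string, int> m;
// m["a"] = 1; // inserts {"a", 1}
// int& v = m["b"]; // inserts {"b", 0} with a value-initialized mapped value
// v = 2; // m["b"] == 2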
using Base::operator[];
// flat_hash_map::bucket_count()
//
// Returns the number of "buckets" within the `flat_hash_map`. Note that
// because a flat hash map contains all elements within its internal storage,
// this value simply equals the current capacity of the `flat_hash_map`.
using Base::bucket_count;
// flat_hash_map::load_factor()
//
// Returns the current load factor of the `flat_hash_map` (the average number
// of slots occupied with a value within the hash map).
using Base::load_factor;
// flat_hash_map::max_load_factor()
//
// Manages the maximum load factor of the `flat_hash_map`. Overloads are
// listed below.
//
// float flat_hash_map::max_load_factor()
//
// Returns the current maximum load factor of the `flat_hash_map`.
//
// void flat_hash_map::max_load_factor(float ml)
//
// Sets the maximum load factor of the `flat_hash_map` to the passed value.
//
// NOTE: This overload is provided only for API compatibility with the STL;
// `flat_hash_map` will ignore any set load factor and manage its rehashing
// internally as an implementation detail.
using Base::max_load_factor;
// flat_hash_map::get_allocator()
//
// Returns the allocator associated with this `flat_hash_map`.
using Base::get_allocator;
// flat_hash_map::hash_function()
//
// Returns the hashing function used to hash the keys within this
// `flat_hash_map`.
using Base::hash_function;
// flat_hash_map::key_eq()
//
// Returns the function used for comparing key equality.
using Base::key_eq;
};
namespace container_internal {
template <class K, class V>
struct FlatHashMapPolicy {
using slot_type = container_internal::slot_type<K, V>;
using key_type = K;
using mapped_type = V;
using init_type = std::pair</*non const*/ key_type, mapped_type>;
template <class Allocator, class... Args>
static void construct(Allocator* alloc, slot_type* slot, Args&&... args) {
slot_type::construct(alloc, slot, std::forward<Args>(args)...);
}
template <class Allocator>
static void destroy(Allocator* alloc, slot_type* slot) {
slot_type::destroy(alloc, slot);
}
template <class Allocator>
static void transfer(Allocator* alloc, slot_type* new_slot,
slot_type* old_slot) {
slot_type::transfer(alloc, new_slot, old_slot);
}
template <class F, class... Args>
static decltype(absl::container_internal::DecomposePair(
std::declval<F>(), std::declval<Args>()...))
apply(F&& f, Args&&... args) {
return absl::container_internal::DecomposePair(std::forward<F>(f),
std::forward<Args>(args)...);
}
static size_t space_used(const slot_type*) { return 0; }
static std::pair<const K, V>& element(slot_type* slot) { return slot->value; }
static V& value(std::pair<const K, V>* kv) { return kv->second; }
static const V& value(const std::pair<const K, V>* kv) { return kv->second; }
};
} // namespace container_internal
namespace container_algorithm_internal {
// Specialization of trait in absl/algorithm/container.h
template <class Key, class T, class Hash, class KeyEqual, class Allocator>
struct IsUnorderedContainer<
absl::flat_hash_map<Key, T, Hash, KeyEqual, Allocator>> : std::true_type {};
} // namespace container_algorithm_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_FLAT_HASH_MAP_H_

@@ -0,0 +1,243 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/flat_hash_map.h"
#include "absl/container/internal/hash_generator_testing.h"
#include "absl/container/internal/unordered_map_constructor_test.h"
#include "absl/container/internal/unordered_map_lookup_test.h"
#include "absl/container/internal/unordered_map_modifiers_test.h"
#include "absl/types/any.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
using ::absl::container_internal::hash_internal::Enum;
using ::absl::container_internal::hash_internal::EnumClass;
using ::testing::_;
using ::testing::Pair;
using ::testing::UnorderedElementsAre;
template <class K, class V>
using Map =
flat_hash_map<K, V, StatefulTestingHash, StatefulTestingEqual, Alloc<>>;
static_assert(!std::is_standard_layout<NonStandardLayout>(), "");
using MapTypes =
::testing::Types<Map<int, int>, Map<std::string, int>, Map<Enum, std::string>,
Map<EnumClass, int>, Map<int, NonStandardLayout>,
Map<NonStandardLayout, int>>;
INSTANTIATE_TYPED_TEST_CASE_P(FlatHashMap, ConstructorTest, MapTypes);
INSTANTIATE_TYPED_TEST_CASE_P(FlatHashMap, LookupTest, MapTypes);
INSTANTIATE_TYPED_TEST_CASE_P(FlatHashMap, ModifiersTest, MapTypes);
TEST(FlatHashMap, StandardLayout) {
struct Int {
explicit Int(size_t value) : value(value) {}
Int() : value(0) { ADD_FAILURE(); }
Int(const Int& other) : value(other.value) { ADD_FAILURE(); }
Int(Int&&) = default;
bool operator==(const Int& other) const { return value == other.value; }
size_t value;
};
static_assert(std::is_standard_layout<Int>(), "");
struct Hash {
size_t operator()(const Int& obj) const { return obj.value; }
};
// Verify that neither the key nor the value get default-constructed or
// copy-constructed.
{
flat_hash_map<Int, Int, Hash> m;
m.try_emplace(Int(1), Int(2));
m.try_emplace(Int(3), Int(4));
m.erase(Int(1));
m.rehash(2 * m.bucket_count());
}
{
flat_hash_map<Int, Int, Hash> m;
m.try_emplace(Int(1), Int(2));
m.try_emplace(Int(3), Int(4));
m.erase(Int(1));
m.clear();
}
}
// gcc becomes unhappy if this is inside the method, so pull it out here.
struct balast {};
TEST(FlatHashMap, IteratesMsan) {
// Because SwissTable randomizes on pointer addresses, we keep old tables
// around to ensure we don't reuse old memory.
std::vector<absl::flat_hash_map<int, balast>> garbage;
for (int i = 0; i < 100; ++i) {
absl::flat_hash_map<int, balast> t;
for (int j = 0; j < 100; ++j) {
t[j];
for (const auto& p : t) EXPECT_THAT(p, Pair(_, _));
}
garbage.push_back(std::move(t));
}
}
// Demonstration of the "Lazy Key" pattern. This uses heterogeneous insert to
// avoid creating expensive key elements when the item is already present in the
// map.
struct LazyInt {
explicit LazyInt(size_t value, int* tracker)
: value(value), tracker(tracker) {}
explicit operator size_t() const {
++*tracker;
return value;
}
size_t value;
int* tracker;
};
struct Hash {
using is_transparent = void;
int* tracker;
size_t operator()(size_t obj) const {
++*tracker;
return obj;
}
size_t operator()(const LazyInt& obj) const {
++*tracker;
return obj.value;
}
};
struct Eq {
using is_transparent = void;
bool operator()(size_t lhs, size_t rhs) const {
return lhs == rhs;
}
bool operator()(size_t lhs, const LazyInt& rhs) const {
return lhs == rhs.value;
}
};
TEST(FlatHashMap, LazyKeyPattern) {
// Exact hash counts are only guaranteed in opt mode; in debug mode, internal
// assertions can cause extra calls to hash, so those checks are guarded by
// NDEBUG below.
int conversions = 0;
int hashes = 0;
flat_hash_map<size_t, size_t, Hash, Eq> m(0, Hash{&hashes});
m[LazyInt(1, &conversions)] = 1;
EXPECT_THAT(m, UnorderedElementsAre(Pair(1, 1)));
EXPECT_EQ(conversions, 1);
#ifdef NDEBUG
EXPECT_EQ(hashes, 1);
#endif
m[LazyInt(1, &conversions)] = 2;
EXPECT_THAT(m, UnorderedElementsAre(Pair(1, 2)));
EXPECT_EQ(conversions, 1);
#ifdef NDEBUG
EXPECT_EQ(hashes, 2);
#endif
m.try_emplace(LazyInt(2, &conversions), 3);
EXPECT_THAT(m, UnorderedElementsAre(Pair(1, 2), Pair(2, 3)));
EXPECT_EQ(conversions, 2);
#ifdef NDEBUG
EXPECT_EQ(hashes, 3);
#endif
m.try_emplace(LazyInt(2, &conversions), 4);
EXPECT_THAT(m, UnorderedElementsAre(Pair(1, 2), Pair(2, 3)));
EXPECT_EQ(conversions, 2);
#ifdef NDEBUG
EXPECT_EQ(hashes, 4);
#endif
}
TEST(FlatHashMap, BitfieldArgument) {
union {
int n : 1;
};
n = 0;
flat_hash_map<int, int> m;
m.erase(n);
m.count(n);
m.prefetch(n);
m.find(n);
m.contains(n);
m.equal_range(n);
m.insert_or_assign(n, n);
m.insert_or_assign(m.end(), n, n);
m.try_emplace(n);
m.try_emplace(m.end(), n);
m.at(n);
m[n];
}
TEST(FlatHashMap, MergeExtractInsert) {
// We can't test mutable keys or non-copyable keys with flat_hash_map.
// Test that the nodes have the proper API.
absl::flat_hash_map<int, int> m = {{1, 7}, {2, 9}};
auto node = m.extract(1);
EXPECT_TRUE(node);
EXPECT_EQ(node.key(), 1);
EXPECT_EQ(node.mapped(), 7);
EXPECT_THAT(m, UnorderedElementsAre(Pair(2, 9)));
node.mapped() = 17;
m.insert(std::move(node));
EXPECT_THAT(m, UnorderedElementsAre(Pair(1, 17), Pair(2, 9)));
}
#if !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
TEST(FlatHashMap, Any) {
absl::flat_hash_map<int, absl::any> m;
m.emplace(1, 7);
auto it = m.find(1);
ASSERT_NE(it, m.end());
EXPECT_EQ(7, absl::any_cast<int>(it->second));
m.emplace(std::piecewise_construct, std::make_tuple(2), std::make_tuple(8));
it = m.find(2);
ASSERT_NE(it, m.end());
EXPECT_EQ(8, absl::any_cast<int>(it->second));
m.emplace(std::piecewise_construct, std::make_tuple(3),
std::make_tuple(absl::any(9)));
it = m.find(3);
ASSERT_NE(it, m.end());
EXPECT_EQ(9, absl::any_cast<int>(it->second));
struct H {
size_t operator()(const absl::any&) const { return 0; }
};
struct E {
bool operator()(const absl::any&, const absl::any&) const { return true; }
};
absl::flat_hash_map<absl::any, int, H, E> m2;
m2.emplace(1, 7);
auto it2 = m2.find(1);
ASSERT_NE(it2, m2.end());
EXPECT_EQ(7, it2->second);
}
#endif // !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,491 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// -----------------------------------------------------------------------------
// File: flat_hash_set.h
// -----------------------------------------------------------------------------
//
// An `absl::flat_hash_set<T>` is an unordered associative container designed to
// be a more efficient replacement for `std::unordered_set`. Like
// `unordered_set`, search, insertion, and deletion of set elements can be done
// as an `O(1)` operation. However, `flat_hash_set` (like the other unordered
// associative containers known collectively as the Abseil "Swiss tables")
// contains other optimizations that result in both memory and computation
// advantages.
//
// In most cases, your default choice for a hash set should be a set of type
// `flat_hash_set`.
#ifndef ABSL_CONTAINER_FLAT_HASH_SET_H_
#define ABSL_CONTAINER_FLAT_HASH_SET_H_
#include <type_traits>
#include <utility>
#include "absl/algorithm/container.h"
#include "absl/base/macros.h"
#include "absl/container/internal/container_memory.h"
#include "absl/container/internal/hash_function_defaults.h" // IWYU pragma: export
#include "absl/container/internal/raw_hash_set.h" // IWYU pragma: export
#include "absl/memory/memory.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
template <typename T>
struct FlatHashSetPolicy;
} // namespace container_internal
// -----------------------------------------------------------------------------
// absl::flat_hash_set
// -----------------------------------------------------------------------------
//
// An `absl::flat_hash_set<T>` is an unordered associative container which has
// been optimized for both speed and memory footprint in most common use cases.
// Its interface is similar to that of `std::unordered_set<T>` with the
// following notable differences:
//
// * Requires keys that are CopyConstructible
// * Supports heterogeneous lookup, through `find()` and `insert()`, provided
// that the set is supplied a compatible heterogeneous hashing function and
// equality operator.
// * Invalidates any references and pointers to elements within the table after
// `rehash()`.
// * Contains a `capacity()` member function indicating the number of element
// slots (open, deleted, and empty) within the hash set.
// * Returns `void` from the `erase(iterator)` overload.
//
// By default, `flat_hash_set` uses the `absl::Hash` hashing framework. All
// fundamental and Abseil types that support the `absl::Hash` framework have a
// compatible equality operator for comparing insertions into `flat_hash_set`.
// If your type is not yet supported by the `absl::Hash` framework, see
// absl/hash/hash.h for information on extending Abseil hashing to user-defined
// types.
//
// NOTE: A `flat_hash_set` stores its keys directly inside its implementation
// array to avoid memory indirection. Because a `flat_hash_set` is designed to
// move data when rehashed, set keys will not retain pointer stability. If you
// require pointer stability, consider using
// `absl::flat_hash_set<std::unique_ptr<T>>`. If your type is not moveable and
// you require pointer stability, consider `absl::node_hash_set` instead.
//
// Example:
//
// // Create a flat hash set of three strings
// absl::flat_hash_set<std::string> ducks =
// {"huey", "dewey", "louie"};
//
// // Insert a new element into the flat hash set
// ducks.insert("donald");
//
// // Force a rehash of the flat hash set
// ducks.rehash(0);
//
// // See if "dewey" is present
// if (ducks.contains("dewey")) {
// std::cout << "We found dewey!" << std::endl;
// }
template <class T, class Hash = absl::container_internal::hash_default_hash<T>,
class Eq = absl::container_internal::hash_default_eq<T>,
class Allocator = std::allocator<T>>
class flat_hash_set
: public absl::container_internal::raw_hash_set<
absl::container_internal::FlatHashSetPolicy<T>, Hash, Eq, Allocator> {
using Base = typename flat_hash_set::raw_hash_set;
public:
// Constructors and Assignment Operators
//
// A flat_hash_set supports the same overload set as `std::unordered_set`
// for construction and assignment:
//
// * Default constructor
//
// // No allocation for the table's elements is made.
// absl::flat_hash_set<std::string> set1;
//
// * Initializer List constructor
//
// absl::flat_hash_set<std::string> set2 =
// {{"huey"}, {"dewey"}, {"louie"},};
//
// * Copy constructor
//
// absl::flat_hash_set<std::string> set3(set2);
//
// * Copy assignment operator
//
// // Hash functor and Comparator are copied as well
// absl::flat_hash_set<std::string> set4;
// set4 = set3;
//
// * Move constructor
//
// // Move is guaranteed efficient
// absl::flat_hash_set<std::string> set5(std::move(set4));
//
// * Move assignment operator
//
// // May be efficient if allocators are compatible
// absl::flat_hash_set<std::string> set6;
// set6 = std::move(set5);
//
// * Range constructor
//
// std::vector<std::string> v = {"a", "b"};
// absl::flat_hash_set<std::string> set7(v.begin(), v.end());
flat_hash_set() {}
using Base::Base;
// flat_hash_set::begin()
//
// Returns an iterator to the beginning of the `flat_hash_set`.
using Base::begin;
// flat_hash_set::cbegin()
//
// Returns a const iterator to the beginning of the `flat_hash_set`.
using Base::cbegin;
// flat_hash_set::cend()
//
// Returns a const iterator to the end of the `flat_hash_set`.
using Base::cend;
// flat_hash_set::end()
//
// Returns an iterator to the end of the `flat_hash_set`.
using Base::end;
// flat_hash_set::capacity()
//
// Returns the number of element slots (assigned, deleted, and empty)
// available within the `flat_hash_set`.
//
// NOTE: this member function is particular to `absl::flat_hash_set` and is
// not provided in the `std::unordered_set` API.
using Base::capacity;
// flat_hash_set::empty()
//
// Returns whether or not the `flat_hash_set` is empty.
using Base::empty;
// flat_hash_set::max_size()
//
// Returns the largest theoretical possible number of elements within a
// `flat_hash_set` under current memory constraints. This value can be thought
// of as the largest value of `std::distance(begin(), end())` for a
// `flat_hash_set<T>`.
using Base::max_size;
// flat_hash_set::size()
//
// Returns the number of elements currently within the `flat_hash_set`.
using Base::size;
// flat_hash_set::clear()
//
// Removes all elements from the `flat_hash_set`. Invalidates any references,
// pointers, or iterators referring to contained elements.
//
// NOTE: this operation may shrink the underlying buffer. To avoid shrinking
// the underlying buffer, call `erase(begin(), end())`.
using Base::clear;
// flat_hash_set::erase()
//
// Erases elements within the `flat_hash_set`. Erasing does not trigger a
// rehash. Overloads are listed below.
//
// void erase(const_iterator pos):
//
// Erases the element at `pos` of the `flat_hash_set`, returning
// `void`.
//
// NOTE: this return behavior is different than that of STL containers in
// general and `std::unordered_set` in particular.
//
// iterator erase(const_iterator first, const_iterator last):
//
// Erases the elements in the half-open interval [`first`, `last`), returning
// an
// iterator pointing to `last`.
//
// size_type erase(const key_type& key):
//
// Erases the element with the matching key, if it exists.
using Base::erase;
// flat_hash_set::insert()
//
// Inserts an element of the specified value into the `flat_hash_set`,
// returning an iterator pointing to the newly inserted element, provided that
// an element with the given key does not already exist. If rehashing occurs
// due to the insertion, all iterators are invalidated. Overloads are listed
// below.
//
// std::pair<iterator,bool> insert(const T& value):
//
// Inserts a value into the `flat_hash_set`. Returns a pair consisting of an
// iterator to the inserted element (or to the element that prevented the
// insertion) and a bool denoting whether the insertion took place.
//
// std::pair<iterator,bool> insert(T&& value):
//
// Inserts a moveable value into the `flat_hash_set`. Returns a pair
// consisting of an iterator to the inserted element (or to the element that
// prevented the insertion) and a bool denoting whether the insertion took
// place.
//
// iterator insert(const_iterator hint, const T& value):
// iterator insert(const_iterator hint, T&& value):
//
// Inserts a value, using the position of `hint` as a non-binding suggestion
// for where to begin the insertion search. Returns an iterator to the
// inserted element, or to the existing element that prevented the
// insertion.
//
// void insert(InputIterator first, InputIterator last):
//
// Inserts a range of values [`first`, `last`).
//
// NOTE: Although the STL does not specify which element may be inserted if
// multiple keys compare equivalently, for `flat_hash_set` we guarantee the
// first match is inserted.
//
// void insert(std::initializer_list<T> ilist):
//
// Inserts the elements within the initializer list `ilist`.
//
// NOTE: Although the STL does not specify which element may be inserted if
// multiple keys compare equivalently within the initializer list, for
// `flat_hash_set` we guarantee the first match is inserted.
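//
// Example (a minimal illustrative sketch):
//
// absl::flat_hash_set<std::string> s;
// auto result = s.insert("huey"); // result.second == true
// result = s.insert("huey"); // result.second == false; already present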
using Base::insert;
// flat_hash_set::emplace()
//
// Inserts an element of the specified value by constructing it in-place
// within the `flat_hash_set`, provided that no element with the given key
// already exists.
//
// The element may be constructed even if there already is an element with the
// key in the container, in which case the newly constructed element will be
// destroyed immediately.
//
// If rehashing occurs due to the insertion, all iterators are invalidated.
using Base::emplace;
// flat_hash_set::emplace_hint()
//
// Inserts an element of the specified value by constructing it in-place
// within the `flat_hash_set`, using the position of `hint` as a non-binding
// suggestion for where to begin the insertion search, and only inserts
// provided that no element with the given key already exists.
//
// The element may be constructed even if there already is an element with the
// key in the container, in which case the newly constructed element will be
// destroyed immediately.
//
// If rehashing occurs due to the insertion, all iterators are invalidated.
using Base::emplace_hint;
// flat_hash_set::extract()
//
// Extracts the indicated element, erasing it in the process, and returns it
// as a C++17-compatible node handle. Overloads are listed below.
//
// node_type extract(const_iterator position):
//
// Extracts the element at the indicated position and returns a node handle
// owning that extracted data.
//
// node_type extract(const key_type& x):
//
// Extracts the element with the key matching the passed key value and
// returns a node handle owning that extracted data. If the `flat_hash_set`
// does not contain an element with a matching key, this function returns an
// empty node handle.
using Base::extract;
// flat_hash_set::merge()
//
// Extracts elements from a given `source` flat hash set into this
// `flat_hash_set`. If the destination `flat_hash_set` already contains an
// element with an equivalent key, that element is not extracted.
using Base::merge;
// flat_hash_set::swap(flat_hash_set& other)
//
// Exchanges the contents of this `flat_hash_set` with those of the `other`
// flat hash set, avoiding invocation of any move, copy, or swap operations on
// individual elements.
//
// All iterators and references on the `flat_hash_set` remain valid, except
// for the past-the-end iterator, which is invalidated.
//
// `swap()` requires that the flat hash set's hashing and key equivalence
// functions be Swappable; they are exchanged using unqualified calls to
// non-member `swap()`. If the set's allocator has
// `std::allocator_traits<allocator_type>::propagate_on_container_swap::value`
// set to `true`, the allocators are also exchanged using an unqualified call
// to non-member `swap()`; otherwise, the allocators are not swapped.
using Base::swap;
// flat_hash_set::rehash(count)
//
// Rehashes the `flat_hash_set`, setting the number of slots to be at least
// the passed value. If the new number of slots increases the load factor more
// than the current maximum load factor
// (`count` < `size()` / `max_load_factor()`), then the new number of slots
// will be at least `size()` / `max_load_factor()`.
//
// To force a rehash, pass rehash(0).
//
// NOTE: unlike behavior in `std::unordered_set`, references are also
// invalidated upon a `rehash()`.
using Base::rehash;
// flat_hash_set::reserve(count)
//
// Sets the number of slots in the `flat_hash_set` to the number needed to
// accommodate at least `count` total elements without exceeding the current
// maximum load factor, and may rehash the container if needed.
using Base::reserve;
// flat_hash_set::contains()
//
// Determines whether an element comparing equal to the given `key` exists
// within the `flat_hash_set`, returning `true` if so or `false` otherwise.
using Base::contains;
// flat_hash_set::count(const Key& key) const
//
// Returns the number of elements comparing equal to the given `key` within
// the `flat_hash_set`. Note that this function will return either `1` or `0`
// since duplicate elements are not allowed within a `flat_hash_set`.
using Base::count;
// flat_hash_set::equal_range()
//
// Returns a closed range [first, last], defined by a `std::pair` of two
// iterators, containing all elements with the passed key in the
// `flat_hash_set`.
using Base::equal_range;
// flat_hash_set::find()
//
// Finds an element with the passed `key` within the `flat_hash_set`.
using Base::find;
// flat_hash_set::bucket_count()
//
// Returns the number of "buckets" within the `flat_hash_set`. Note that
// because a flat hash set contains all elements within its internal storage,
// this value simply equals the current capacity of the `flat_hash_set`.
using Base::bucket_count;
// flat_hash_set::load_factor()
//
// Returns the current load factor of the `flat_hash_set` (the average number
// of slots occupied with a value within the hash set).
using Base::load_factor;
// flat_hash_set::max_load_factor()
//
// Manages the maximum load factor of the `flat_hash_set`. Overloads are
// listed below.
//
// float flat_hash_set::max_load_factor()
//
// Returns the current maximum load factor of the `flat_hash_set`.
//
// void flat_hash_set::max_load_factor(float ml)
//
// Sets the maximum load factor of the `flat_hash_set` to the passed value.
//
// NOTE: This overload is provided only for API compatibility with the STL;
// `flat_hash_set` will ignore any set load factor and manage its rehashing
// internally as an implementation detail.
using Base::max_load_factor;
// flat_hash_set::get_allocator()
//
// Returns the allocator associated with this `flat_hash_set`.
using Base::get_allocator;
// flat_hash_set::hash_function()
//
// Returns the hashing function used to hash the keys within this
// `flat_hash_set`.
using Base::hash_function;
// flat_hash_set::key_eq()
//
// Returns the function used for comparing key equality.
using Base::key_eq;
};
namespace container_internal {
template <class T>
struct FlatHashSetPolicy {
using slot_type = T;
using key_type = T;
using init_type = T;
using constant_iterators = std::true_type;
template <class Allocator, class... Args>
static void construct(Allocator* alloc, slot_type* slot, Args&&... args) {
absl::allocator_traits<Allocator>::construct(*alloc, slot,
std::forward<Args>(args)...);
}
template <class Allocator>
static void destroy(Allocator* alloc, slot_type* slot) {
absl::allocator_traits<Allocator>::destroy(*alloc, slot);
}
template <class Allocator>
static void transfer(Allocator* alloc, slot_type* new_slot,
slot_type* old_slot) {
construct(alloc, new_slot, std::move(*old_slot));
destroy(alloc, old_slot);
}
static T& element(slot_type* slot) { return *slot; }
template <class F, class... Args>
static decltype(absl::container_internal::DecomposeValue(
std::declval<F>(), std::declval<Args>()...))
apply(F&& f, Args&&... args) {
return absl::container_internal::DecomposeValue(
std::forward<F>(f), std::forward<Args>(args)...);
}
static size_t space_used(const T*) { return 0; }
};
} // namespace container_internal
namespace container_algorithm_internal {
// Specialization of trait in absl/algorithm/container.h
template <class Key, class Hash, class KeyEqual, class Allocator>
struct IsUnorderedContainer<absl::flat_hash_set<Key, Hash, KeyEqual, Allocator>>
: std::true_type {};
} // namespace container_algorithm_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_FLAT_HASH_SET_H_

@@ -0,0 +1,128 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/flat_hash_set.h"
#include <vector>
#include "absl/container/internal/hash_generator_testing.h"
#include "absl/container/internal/unordered_set_constructor_test.h"
#include "absl/container/internal/unordered_set_lookup_test.h"
#include "absl/container/internal/unordered_set_modifiers_test.h"
#include "absl/memory/memory.h"
#include "absl/strings/string_view.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
using ::absl::container_internal::hash_internal::Enum;
using ::absl::container_internal::hash_internal::EnumClass;
using ::testing::Pointee;
using ::testing::UnorderedElementsAre;
using ::testing::UnorderedElementsAreArray;
template <class T>
using Set =
absl::flat_hash_set<T, StatefulTestingHash, StatefulTestingEqual, Alloc<T>>;
using SetTypes =
::testing::Types<Set<int>, Set<std::string>, Set<Enum>, Set<EnumClass>>;
INSTANTIATE_TYPED_TEST_CASE_P(FlatHashSet, ConstructorTest, SetTypes);
INSTANTIATE_TYPED_TEST_CASE_P(FlatHashSet, LookupTest, SetTypes);
INSTANTIATE_TYPED_TEST_CASE_P(FlatHashSet, ModifiersTest, SetTypes);
TEST(FlatHashSet, EmplaceString) {
std::vector<std::string> v = {"a", "b"};
absl::flat_hash_set<absl::string_view> hs(v.begin(), v.end());
EXPECT_THAT(hs, UnorderedElementsAreArray(v));
}
TEST(FlatHashSet, BitfieldArgument) {
union {
int n : 1;
};
n = 0;
absl::flat_hash_set<int> s = {n};
s.insert(n);
s.insert(s.end(), n);
s.insert({n});
s.erase(n);
s.count(n);
s.prefetch(n);
s.find(n);
s.contains(n);
s.equal_range(n);
}
TEST(FlatHashSet, MergeExtractInsert) {
struct Hash {
size_t operator()(const std::unique_ptr<int>& p) const { return *p; }
};
struct Eq {
bool operator()(const std::unique_ptr<int>& a,
const std::unique_ptr<int>& b) const {
return *a == *b;
}
};
absl::flat_hash_set<std::unique_ptr<int>, Hash, Eq> set1, set2;
set1.insert(absl::make_unique<int>(7));
set1.insert(absl::make_unique<int>(17));
set2.insert(absl::make_unique<int>(7));
set2.insert(absl::make_unique<int>(19));
EXPECT_THAT(set1, UnorderedElementsAre(Pointee(7), Pointee(17)));
EXPECT_THAT(set2, UnorderedElementsAre(Pointee(7), Pointee(19)));
set1.merge(set2);
EXPECT_THAT(set1, UnorderedElementsAre(Pointee(7), Pointee(17), Pointee(19)));
EXPECT_THAT(set2, UnorderedElementsAre(Pointee(7)));
auto node = set1.extract(absl::make_unique<int>(7));
EXPECT_TRUE(node);
EXPECT_THAT(node.value(), Pointee(7));
EXPECT_THAT(set1, UnorderedElementsAre(Pointee(17), Pointee(19)));
auto insert_result = set2.insert(std::move(node));
EXPECT_FALSE(node);
EXPECT_FALSE(insert_result.inserted);
EXPECT_TRUE(insert_result.node);
EXPECT_THAT(insert_result.node.value(), Pointee(7));
EXPECT_EQ(**insert_result.position, 7);
EXPECT_NE(insert_result.position->get(), insert_result.node.value().get());
EXPECT_THAT(set2, UnorderedElementsAre(Pointee(7)));
node = set1.extract(absl::make_unique<int>(17));
EXPECT_TRUE(node);
EXPECT_THAT(node.value(), Pointee(17));
EXPECT_THAT(set1, UnorderedElementsAre(Pointee(19)));
node.value() = absl::make_unique<int>(23);
insert_result = set2.insert(std::move(node));
EXPECT_FALSE(node);
EXPECT_TRUE(insert_result.inserted);
EXPECT_FALSE(insert_result.node);
EXPECT_EQ(**insert_result.position, 23);
EXPECT_THAT(set2, UnorderedElementsAre(Pointee(7), Pointee(23)));
}
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

File diff suppressed because it is too large.

@@ -66,7 +66,7 @@ BENCHMARK(BM_StdVectorFill)->Range(0, 1024);
// The purpose of the next two benchmarks is to verify that
// absl::InlinedVector is efficient when moving is more efficient than
// copying. To do so, we use strings that are larger than the short
// std::string optimization.
// string optimization.
bool StringRepresentedInline(std::string s) {
const char* chars = s.data();
std::string s1 = std::move(s);

@@ -31,6 +31,7 @@
#include "absl/base/internal/raw_logging.h"
#include "absl/base/macros.h"
#include "absl/container/internal/test_instance_tracker.h"
#include "absl/hash/hash_testing.h"
#include "absl/memory/memory.h"
#include "absl/strings/str_cat.h"
@@ -905,6 +906,8 @@ TYPED_TEST_P(InstanceTest, Swap) {
InstanceTracker tracker;
InstanceVec a, b;
const size_t inlined_capacity = a.capacity();
auto min_len = std::min(l1, l2);
auto max_len = std::max(l1, l2);
for (int i = 0; i < l1; i++) a.push_back(Instance(i));
for (int i = 0; i < l2; i++) b.push_back(Instance(100+i));
EXPECT_EQ(tracker.instances(), l1 + l2);
@@ -918,15 +921,15 @@ TYPED_TEST_P(InstanceTest, Swap) {
EXPECT_EQ(tracker.swaps(), 0); // Allocations are swapped.
EXPECT_EQ(tracker.moves(), 0);
} else if (a.size() <= inlined_capacity && b.size() <= inlined_capacity) {
EXPECT_EQ(tracker.swaps(), std::min(l1, l2));
// TODO(bsamwel): This should use moves when the type is movable.
EXPECT_EQ(tracker.copies(), std::max(l1, l2) - std::min(l1, l2));
EXPECT_EQ(tracker.swaps(), min_len);
EXPECT_EQ((tracker.moves() ? tracker.moves() : tracker.copies()),
max_len - min_len);
} else {
// One is allocated and the other isn't. The allocation is transferred
// without copying elements, and the inlined instances are copied/moved.
EXPECT_EQ(tracker.swaps(), 0);
// TODO(bsamwel): This should use moves when the type is movable.
EXPECT_EQ(tracker.copies(), std::min(l1, l2));
EXPECT_EQ((tracker.moves() ? tracker.moves() : tracker.copies()),
min_len);
}
EXPECT_EQ(l1, b.size());
@@ -1725,42 +1728,87 @@ TEST(AllocatorSupportTest, ScopedAllocatorWorks) {
std::scoped_allocator_adaptor<CountingAllocator<StdVector>>;
using AllocVec = absl::InlinedVector<StdVector, 4, MyAlloc>;
// MSVC 2017's std::vector allocates different amounts of memory in debug
// versus opt mode.
int64_t test_allocated = 0;
StdVector v(CountingAllocator<int>{&test_allocated});
// The amount of memory allocated by a default constructed vector<int>
auto default_std_vec_allocated = test_allocated;
v.push_back(1);
// The amount of memory allocated by a copy-constructed vector<int> with one
// element.
int64_t one_element_std_vec_copy_allocated = test_allocated;
int64_t allocated = 0;
AllocVec vec(MyAlloc{CountingAllocator<StdVector>{&allocated}});
EXPECT_EQ(allocated, 0);
// This default constructs a vector<int>, but the allocator should pass itself
// into the vector<int>.
// into the vector<int>, so check allocation compared to that.
// The absl::InlinedVector does not allocate any memory.
// The vector<int> does not allocate any memory.
// The vector<int> may allocate any memory.
auto expected = default_std_vec_allocated;
vec.resize(1);
EXPECT_EQ(allocated, 0);
EXPECT_EQ(allocated, expected);
// We make vector<int> allocate memory.
// It must go through the allocator even though we didn't construct the
// vector directly.
// vector directly. This assumes that vec[0] doesn't need to grow its
// allocation.
expected += sizeof(int);
vec[0].push_back(1);
EXPECT_EQ(allocated, sizeof(int) * 1);
EXPECT_EQ(allocated, expected);
// Another allocating vector.
expected += one_element_std_vec_copy_allocated;
vec.push_back(vec[0]);
EXPECT_EQ(allocated, sizeof(int) * 2);
EXPECT_EQ(allocated, expected);
// Overflow the inlined memory.
// The absl::InlinedVector will now allocate.
expected += sizeof(StdVector) * 8 + default_std_vec_allocated * 3;
vec.resize(5);
EXPECT_EQ(allocated, sizeof(int) * 2 + sizeof(StdVector) * 8);
EXPECT_EQ(allocated, expected);
// Adding one more in external mode should also work.
expected += one_element_std_vec_copy_allocated;
vec.push_back(vec[0]);
EXPECT_EQ(allocated, sizeof(int) * 3 + sizeof(StdVector) * 8);
EXPECT_EQ(allocated, expected);
// And extending these should still work.
// And extending these should still work. This assumes that vec[0] does not
// need to grow its allocation.
expected += sizeof(int);
vec[0].push_back(1);
EXPECT_EQ(allocated, sizeof(int) * 4 + sizeof(StdVector) * 8);
EXPECT_EQ(allocated, expected);
vec.clear();
EXPECT_EQ(allocated, 0);
}
TEST(AllocatorSupportTest, SizeAllocConstructor) {
constexpr int inlined_size = 4;
using Alloc = CountingAllocator<int>;
using AllocVec = absl::InlinedVector<int, inlined_size, Alloc>;
{
auto len = inlined_size / 2;
int64_t allocated = 0;
auto v = AllocVec(len, Alloc(&allocated));
// Inline storage used; allocator should not be invoked
EXPECT_THAT(allocated, 0);
EXPECT_THAT(v, AllOf(SizeIs(len), Each(0)));
}
{
auto len = inlined_size * 2;
int64_t allocated = 0;
auto v = AllocVec(len, Alloc(&allocated));
// Out of line storage used; allocation of 8 elements expected
EXPECT_THAT(allocated, len * sizeof(int));
EXPECT_THAT(v, AllOf(SizeIs(len), Each(0)));
}
}
} // anonymous namespace

@@ -0,0 +1,177 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Helper class to perform the Empty Base Optimization.
// Ts can contain classes and non-classes, empty or not. For the ones that
// are empty classes, we perform the optimization. If all types in Ts are empty
// classes, then CompressedTuple<Ts...> is itself an empty class.
//
// To access the members, use the member get<N>() function.
//
// Eg:
// absl::container_internal::CompressedTuple<int, T1, T2, T3> value(7, t1, t2,
// t3);
// assert(value.get<0>() == 7);
// T1& t1 = value.get<1>();
// const T2& t2 = value.get<2>();
// ...
//
// http://en.cppreference.com/w/cpp/language/ebo
#ifndef ABSL_CONTAINER_INTERNAL_COMPRESSED_TUPLE_H_
#define ABSL_CONTAINER_INTERNAL_COMPRESSED_TUPLE_H_
#include <tuple>
#include <type_traits>
#include <utility>
#include "absl/utility/utility.h"
#ifdef _MSC_VER
// We need to mark these classes with this declspec to ensure that the empty
// base optimization on which CompressedTuple relies actually happens.
#define ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC __declspec(empty_bases)
#else // _MSC_VER
#define ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC
#endif // _MSC_VER
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
template <typename... Ts>
class CompressedTuple;
namespace internal_compressed_tuple {
template <typename D, size_t I>
struct Elem;
template <typename... B, size_t I>
struct Elem<CompressedTuple<B...>, I>
: std::tuple_element<I, std::tuple<B...>> {};
template <typename D, size_t I>
using ElemT = typename Elem<D, I>::type;
// Use the __is_final intrinsic if available. Where it's not available, classes
// declared with the 'final' specifier cannot be used as CompressedTuple
// elements.
// TODO(sbenza): Replace this with std::is_final in C++14.
template <typename T>
constexpr bool IsFinal() {
#if defined(__clang__) || defined(__GNUC__)
return __is_final(T);
#else
return false;
#endif
}
template <typename T>
constexpr bool ShouldUseBase() {
return std::is_class<T>::value && std::is_empty<T>::value && !IsFinal<T>();
}
// The storage class provides two specializations:
// - For empty classes, it stores T as a base class.
// - For everything else, it stores T as a member.
template <typename D, size_t I, bool = ShouldUseBase<ElemT<D, I>>()>
struct Storage {
using T = ElemT<D, I>;
T value;
constexpr Storage() = default;
explicit constexpr Storage(T&& v) : value(absl::forward<T>(v)) {}
constexpr const T& get() const { return value; }
T& get() { return value; }
};
template <typename D, size_t I>
struct ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC Storage<D, I, true>
: ElemT<D, I> {
using T = internal_compressed_tuple::ElemT<D, I>;
constexpr Storage() = default;
explicit constexpr Storage(T&& v) : T(absl::forward<T>(v)) {}
constexpr const T& get() const { return *this; }
T& get() { return *this; }
};
template <typename D, typename I>
struct ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC CompressedTupleImpl;
template <typename... Ts, size_t... I>
struct ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC
CompressedTupleImpl<CompressedTuple<Ts...>, absl::index_sequence<I...>>
// We use the dummy identity function through std::integral_constant to
// convince MSVC to accept and expand I in that context. Without it
// you would get:
// error C3548: 'I': parameter pack cannot be used in this context
: Storage<CompressedTuple<Ts...>,
std::integral_constant<size_t, I>::value>... {
constexpr CompressedTupleImpl() = default;
explicit constexpr CompressedTupleImpl(Ts&&... args)
: Storage<CompressedTuple<Ts...>, I>(absl::forward<Ts>(args))... {}
};
} // namespace internal_compressed_tuple
// Helper class to perform the Empty Base Class Optimization.
// Ts can contain classes and non-classes, empty or not. For the ones that
// are empty classes, we perform the optimization. If all types in Ts are
// empty classes, then CompressedTuple<Ts...> is itself an empty class.
//
// To access the members, use the member .get<N>() function.
//
// Eg:
// absl::container_internal::CompressedTuple<int, T1, T2, T3> value(7, t1, t2,
// t3);
// assert(value.get<0>() == 7);
// T1& t1 = value.get<1>();
// const T2& t2 = value.get<2>();
// ...
//
// http://en.cppreference.com/w/cpp/language/ebo
template <typename... Ts>
class ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC CompressedTuple
: private internal_compressed_tuple::CompressedTupleImpl<
CompressedTuple<Ts...>, absl::index_sequence_for<Ts...>> {
private:
template <int I>
using ElemT = internal_compressed_tuple::ElemT<CompressedTuple, I>;
public:
constexpr CompressedTuple() = default;
explicit constexpr CompressedTuple(Ts... base)
: CompressedTuple::CompressedTupleImpl(absl::forward<Ts>(base)...) {}
template <int I>
ElemT<I>& get() {
return internal_compressed_tuple::Storage<CompressedTuple, I>::get();
}
template <int I>
constexpr const ElemT<I>& get() const {
return internal_compressed_tuple::Storage<CompressedTuple, I>::get();
}
};
// Explicit specialization for a zero-element tuple
// (needed to avoid ambiguous overloads for the default constructor).
template <>
class ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC CompressedTuple<> {};
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#undef ABSL_INTERNAL_COMPRESSED_TUPLE_DECLSPEC
#endif // ABSL_CONTAINER_INTERNAL_COMPRESSED_TUPLE_H_

@@ -0,0 +1,168 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/internal/compressed_tuple.h"
#include <set>
#include <string>
#include <type_traits>
#include "gmock/gmock.h"
#include "gtest/gtest.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
template <int>
struct Empty {};
template <typename T>
struct NotEmpty {
T value;
};
template <typename T, typename U>
struct TwoValues {
T value1;
U value2;
};
TEST(CompressedTupleTest, Sizeof) {
EXPECT_EQ(sizeof(int), sizeof(CompressedTuple<int>));
EXPECT_EQ(sizeof(int), sizeof(CompressedTuple<int, Empty<0>>));
EXPECT_EQ(sizeof(int), sizeof(CompressedTuple<int, Empty<0>, Empty<1>>));
EXPECT_EQ(sizeof(int),
sizeof(CompressedTuple<int, Empty<0>, Empty<1>, Empty<2>>));
EXPECT_EQ(sizeof(TwoValues<int, double>),
sizeof(CompressedTuple<int, NotEmpty<double>>));
EXPECT_EQ(sizeof(TwoValues<int, double>),
sizeof(CompressedTuple<int, Empty<0>, NotEmpty<double>>));
EXPECT_EQ(sizeof(TwoValues<int, double>),
sizeof(CompressedTuple<int, Empty<0>, NotEmpty<double>, Empty<1>>));
}
TEST(CompressedTupleTest, Access) {
struct S {
std::string x;
};
CompressedTuple<int, Empty<0>, S> x(7, {}, S{"ABC"});
EXPECT_EQ(sizeof(x), sizeof(TwoValues<int, S>));
EXPECT_EQ(7, x.get<0>());
EXPECT_EQ("ABC", x.get<2>().x);
}
TEST(CompressedTupleTest, NonClasses) {
CompressedTuple<int, const char*> x(7, "ABC");
EXPECT_EQ(7, x.get<0>());
EXPECT_STREQ("ABC", x.get<1>());
}
TEST(CompressedTupleTest, MixClassAndNonClass) {
CompressedTuple<int, const char*, Empty<0>, NotEmpty<double>> x(7, "ABC", {},
{1.25});
struct Mock {
int v;
const char* p;
double d;
};
EXPECT_EQ(sizeof(x), sizeof(Mock));
EXPECT_EQ(7, x.get<0>());
EXPECT_STREQ("ABC", x.get<1>());
EXPECT_EQ(1.25, x.get<3>().value);
}
TEST(CompressedTupleTest, Nested) {
CompressedTuple<int, CompressedTuple<int>,
CompressedTuple<int, CompressedTuple<int>>>
x(1, CompressedTuple<int>(2),
CompressedTuple<int, CompressedTuple<int>>(3, CompressedTuple<int>(4)));
EXPECT_EQ(1, x.get<0>());
EXPECT_EQ(2, x.get<1>().get<0>());
EXPECT_EQ(3, x.get<2>().get<0>());
EXPECT_EQ(4, x.get<2>().get<1>().get<0>());
CompressedTuple<Empty<0>, Empty<0>,
CompressedTuple<Empty<0>, CompressedTuple<Empty<0>>>>
y;
std::set<Empty<0>*> empties{&y.get<0>(), &y.get<1>(), &y.get<2>().get<0>(),
&y.get<2>().get<1>().get<0>()};
#ifdef _MSC_VER
// MSVC has a bug where many instances of the same base class are laid out in
// the same address when using __declspec(empty_bases).
// This will be fixed in a future version of MSVC.
int expected = 1;
#else
int expected = 4;
#endif
EXPECT_EQ(expected, sizeof(y));
EXPECT_EQ(expected, empties.size());
EXPECT_EQ(sizeof(y), sizeof(Empty<0>) * empties.size());
EXPECT_EQ(4 * sizeof(char),
sizeof(CompressedTuple<CompressedTuple<char, char>,
CompressedTuple<char, char>>));
EXPECT_TRUE(
(std::is_empty<CompressedTuple<CompressedTuple<Empty<0>>,
CompressedTuple<Empty<1>>>>::value));
}
TEST(CompressedTupleTest, Reference) {
int i = 7;
std::string s = "Very long std::string that goes in the heap";
CompressedTuple<int, int&, std::string, std::string&> x(i, i, s, s);
// Sanity check. We should not have moved from `s`.
EXPECT_EQ(s, "Very long std::string that goes in the heap");
EXPECT_EQ(x.get<0>(), x.get<1>());
EXPECT_NE(&x.get<0>(), &x.get<1>());
EXPECT_EQ(&x.get<1>(), &i);
EXPECT_EQ(x.get<2>(), x.get<3>());
EXPECT_NE(&x.get<2>(), &x.get<3>());
EXPECT_EQ(&x.get<3>(), &s);
}
TEST(CompressedTupleTest, NoElements) {
CompressedTuple<> x;
static_cast<void>(x); // Silence -Wunused-variable.
EXPECT_TRUE(std::is_empty<CompressedTuple<>>::value);
}
TEST(CompressedTupleTest, Constexpr) {
constexpr CompressedTuple<int, double, CompressedTuple<int>> x(
7, 1.25, CompressedTuple<int>(5));
constexpr int x0 = x.get<0>();
constexpr double x1 = x.get<1>();
constexpr int x2 = x.get<2>().get<0>();
EXPECT_EQ(x0, 7);
EXPECT_EQ(x1, 1.25);
EXPECT_EQ(x2, 5);
}
#if defined(__clang__) || defined(__GNUC__)
TEST(CompressedTupleTest, EmptyFinalClass) {
struct S final {
int f() const { return 5; }
};
CompressedTuple<S> x;
EXPECT_EQ(x.get<0>().f(), 5);
}
#endif
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,407 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef ABSL_CONTAINER_INTERNAL_CONTAINER_MEMORY_H_
#define ABSL_CONTAINER_INTERNAL_CONTAINER_MEMORY_H_
#ifdef ADDRESS_SANITIZER
#include <sanitizer/asan_interface.h>
#endif
#ifdef MEMORY_SANITIZER
#include <sanitizer/msan_interface.h>
#endif
#include <cassert>
#include <cstddef>
#include <memory>
#include <tuple>
#include <type_traits>
#include <utility>
#include "absl/memory/memory.h"
#include "absl/utility/utility.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
// Allocates at least n bytes aligned to the specified alignment.
// Alignment must be a power of 2. It must be positive.
//
// Note that many allocators don't honor alignment requirements above certain
// threshold (usually either alignof(std::max_align_t) or alignof(void*)).
// Allocate() doesn't apply alignment corrections. If the underlying allocator
// returns an insufficiently aligned pointer, that's what you are going to get.
template <size_t Alignment, class Alloc>
void* Allocate(Alloc* alloc, size_t n) {
static_assert(Alignment > 0, "");
assert(n && "n must be positive");
struct alignas(Alignment) M {};
using A = typename absl::allocator_traits<Alloc>::template rebind_alloc<M>;
using AT = typename absl::allocator_traits<Alloc>::template rebind_traits<M>;
A mem_alloc(*alloc);
void* p = AT::allocate(mem_alloc, (n + sizeof(M) - 1) / sizeof(M));
assert(reinterpret_cast<uintptr_t>(p) % Alignment == 0 &&
"allocator does not respect alignment");
return p;
}
// The pointer must have been previously obtained by calling
// Allocate<Alignment>(alloc, n).
template <size_t Alignment, class Alloc>
void Deallocate(Alloc* alloc, void* p, size_t n) {
static_assert(Alignment > 0, "");
assert(n && "n must be positive");
struct alignas(Alignment) M {};
using A = typename absl::allocator_traits<Alloc>::template rebind_alloc<M>;
using AT = typename absl::allocator_traits<Alloc>::template rebind_traits<M>;
A mem_alloc(*alloc);
AT::deallocate(mem_alloc, static_cast<M*>(p),
(n + sizeof(M) - 1) / sizeof(M));
}
namespace memory_internal {
// Constructs T into the uninitialized storage pointed to by `ptr` using the
// args specified in the tuple.
template <class Alloc, class T, class Tuple, size_t... I>
void ConstructFromTupleImpl(Alloc* alloc, T* ptr, Tuple&& t,
absl::index_sequence<I...>) {
absl::allocator_traits<Alloc>::construct(
*alloc, ptr, std::get<I>(std::forward<Tuple>(t))...);
}
template <class T, class F>
struct WithConstructedImplF {
template <class... Args>
decltype(std::declval<F>()(std::declval<T>())) operator()(
Args&&... args) const {
return std::forward<F>(f)(T(std::forward<Args>(args)...));
}
F&& f;
};
template <class T, class Tuple, size_t... Is, class F>
decltype(std::declval<F>()(std::declval<T>())) WithConstructedImpl(
Tuple&& t, absl::index_sequence<Is...>, F&& f) {
return WithConstructedImplF<T, F>{std::forward<F>(f)}(
std::get<Is>(std::forward<Tuple>(t))...);
}
template <class T, size_t... Is>
auto TupleRefImpl(T&& t, absl::index_sequence<Is...>)
-> decltype(std::forward_as_tuple(std::get<Is>(std::forward<T>(t))...)) {
return std::forward_as_tuple(std::get<Is>(std::forward<T>(t))...);
}
// Returns a tuple of references to the elements of the input tuple. T must be a
// tuple.
template <class T>
auto TupleRef(T&& t) -> decltype(
TupleRefImpl(std::forward<T>(t),
absl::make_index_sequence<
std::tuple_size<typename std::decay<T>::type>::value>())) {
return TupleRefImpl(
std::forward<T>(t),
absl::make_index_sequence<
std::tuple_size<typename std::decay<T>::type>::value>());
}
template <class F, class K, class V>
decltype(std::declval<F>()(std::declval<const K&>(), std::piecewise_construct,
std::declval<std::tuple<K>>(), std::declval<V>()))
DecomposePairImpl(F&& f, std::pair<std::tuple<K>, V> p) {
const auto& key = std::get<0>(p.first);
return std::forward<F>(f)(key, std::piecewise_construct, std::move(p.first),
std::move(p.second));
}
} // namespace memory_internal
// Constructs T into the uninitialized storage pointed to by `ptr` using the
// args specified in the tuple.
template <class Alloc, class T, class Tuple>
void ConstructFromTuple(Alloc* alloc, T* ptr, Tuple&& t) {
memory_internal::ConstructFromTupleImpl(
alloc, ptr, std::forward<Tuple>(t),
absl::make_index_sequence<
std::tuple_size<typename std::decay<Tuple>::type>::value>());
}
// Constructs T using the args specified in the tuple and calls F with the
// constructed value.
template <class T, class Tuple, class F>
decltype(std::declval<F>()(std::declval<T>())) WithConstructed(
Tuple&& t, F&& f) {
return memory_internal::WithConstructedImpl<T>(
std::forward<Tuple>(t),
absl::make_index_sequence<
std::tuple_size<typename std::decay<Tuple>::type>::value>(),
std::forward<F>(f));
}
// Given arguments of a std::pair's constructor, PairArgs() returns a pair of
// tuples with references to the passed arguments. The tuples contain
// constructor arguments for the first and the second elements of the pair.
//
// The following two snippets are equivalent.
//
// 1. std::pair<F, S> p(args...);
//
// 2. auto a = PairArgs(args...);
// std::pair<F, S> p(std::piecewise_construct,
// std::move(a.first), std::move(a.second));
inline std::pair<std::tuple<>, std::tuple<>> PairArgs() { return {}; }
template <class F, class S>
std::pair<std::tuple<F&&>, std::tuple<S&&>> PairArgs(F&& f, S&& s) {
return {std::piecewise_construct, std::forward_as_tuple(std::forward<F>(f)),
std::forward_as_tuple(std::forward<S>(s))};
}
template <class F, class S>
std::pair<std::tuple<const F&>, std::tuple<const S&>> PairArgs(
const std::pair<F, S>& p) {
return PairArgs(p.first, p.second);
}
template <class F, class S>
std::pair<std::tuple<F&&>, std::tuple<S&&>> PairArgs(std::pair<F, S>&& p) {
return PairArgs(std::forward<F>(p.first), std::forward<S>(p.second));
}
template <class F, class S>
auto PairArgs(std::piecewise_construct_t, F&& f, S&& s)
-> decltype(std::make_pair(memory_internal::TupleRef(std::forward<F>(f)),
memory_internal::TupleRef(std::forward<S>(s)))) {
return std::make_pair(memory_internal::TupleRef(std::forward<F>(f)),
memory_internal::TupleRef(std::forward<S>(s)));
}
// A helper function for implementing apply() in map policies.
template <class F, class... Args>
auto DecomposePair(F&& f, Args&&... args)
-> decltype(memory_internal::DecomposePairImpl(
std::forward<F>(f), PairArgs(std::forward<Args>(args)...))) {
return memory_internal::DecomposePairImpl(
std::forward<F>(f), PairArgs(std::forward<Args>(args)...));
}
// A helper function for implementing apply() in set policies.
template <class F, class Arg>
decltype(std::declval<F>()(std::declval<const Arg&>(), std::declval<Arg>()))
DecomposeValue(F&& f, Arg&& arg) {
const auto& key = arg;
return std::forward<F>(f)(key, std::forward<Arg>(arg));
}
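// Example (a minimal sketch of what the helpers pass to the callback): these
// functions split an insertion's arguments into the key used for lookup and
// the arguments used to construct the element if the key is absent.
//
// Set policy: DecomposeValue(f, 5) calls f(5, 5)
// (the key for lookup, then the argument used for construction).
// Map policy: DecomposePair(f, 1, 2) calls
// f(1, std::piecewise_construct, /*key args*/ {1}, /*mapped args*/ {2})
// (the key, then the pair's piecewise constructor arguments).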
// Helper functions for asan and msan.
inline void SanitizerPoisonMemoryRegion(const void* m, size_t s) {
#ifdef ADDRESS_SANITIZER
ASAN_POISON_MEMORY_REGION(m, s);
#endif
#ifdef MEMORY_SANITIZER
__msan_poison(m, s);
#endif
(void)m;
(void)s;
}
inline void SanitizerUnpoisonMemoryRegion(const void* m, size_t s) {
#ifdef ADDRESS_SANITIZER
ASAN_UNPOISON_MEMORY_REGION(m, s);
#endif
#ifdef MEMORY_SANITIZER
__msan_unpoison(m, s);
#endif
(void)m;
(void)s;
}
template <typename T>
inline void SanitizerPoisonObject(const T* object) {
SanitizerPoisonMemoryRegion(object, sizeof(T));
}
template <typename T>
inline void SanitizerUnpoisonObject(const T* object) {
SanitizerUnpoisonMemoryRegion(object, sizeof(T));
}
namespace memory_internal {
// If Pair is a standard-layout type, OffsetOf<Pair>::kFirst and
// OffsetOf<Pair>::kSecond are equivalent to offsetof(Pair, first) and
// offsetof(Pair, second) respectively. Otherwise they are -1.
//
// The purpose of OffsetOf is to avoid calling offsetof() on a
// non-standard-layout type, which is non-portable.
template <class Pair, class = std::true_type>
struct OffsetOf {
static constexpr size_t kFirst = -1;
static constexpr size_t kSecond = -1;
};
template <class Pair>
struct OffsetOf<Pair, typename std::is_standard_layout<Pair>::type> {
static constexpr size_t kFirst = offsetof(Pair, first);
static constexpr size_t kSecond = offsetof(Pair, second);
};
template <class K, class V>
struct IsLayoutCompatible {
private:
struct Pair {
K first;
V second;
};
// Is P layout-compatible with Pair?
template <class P>
static constexpr bool LayoutCompatible() {
return std::is_standard_layout<P>() && sizeof(P) == sizeof(Pair) &&
alignof(P) == alignof(Pair) &&
memory_internal::OffsetOf<P>::kFirst ==
memory_internal::OffsetOf<Pair>::kFirst &&
memory_internal::OffsetOf<P>::kSecond ==
memory_internal::OffsetOf<Pair>::kSecond;
}
public:
// Whether pair<const K, V> and pair<K, V> are layout-compatible. If they are,
// then it is safe to store them in a union and read from either.
static constexpr bool value = std::is_standard_layout<K>() &&
std::is_standard_layout<Pair>() &&
memory_internal::OffsetOf<Pair>::kFirst == 0 &&
LayoutCompatible<std::pair<K, V>>() &&
LayoutCompatible<std::pair<const K, V>>();
};
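// For illustration (assuming a typical implementation where these pairs are
// standard-layout and identically laid out):
//
// static_assert(IsLayoutCompatible<int, int>::value,
//               "pair<const int, int> and pair<int, int> share a layout");
//
// Any non-standard-layout K or V makes the predicate false (the
// is_standard_layout checks fail and OffsetOf falls back to -1).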
} // namespace memory_internal
// If kMutableKeys is false, only the value member is accessed.
//
// If kMutableKeys is true, key is accessed through all slots while value and
// mutable_value are accessed only via INITIALIZED slots. Slots are created and
// destroyed via mutable_value so that the key can be moved later.
template <class K, class V>
union slot_type {
private:
static void emplace(slot_type* slot) {
// Constructing the union doesn't do anything at runtime, but it allows us
// to access its members without violating aliasing rules.
new (slot) slot_type;
}
// If pair<const K, V> and pair<K, V> are layout-compatible, we can accept one
// or the other via slot_type. We are also free to access the key via
// slot_type::key in this case.
using kMutableKeys =
std::integral_constant<bool,
memory_internal::IsLayoutCompatible<K, V>::value>;
public:
slot_type() {}
~slot_type() = delete;
using value_type = std::pair<const K, V>;
using mutable_value_type = std::pair<K, V>;
value_type value;
mutable_value_type mutable_value;
K key;
template <class Allocator, class... Args>
static void construct(Allocator* alloc, slot_type* slot, Args&&... args) {
emplace(slot);
if (kMutableKeys::value) {
absl::allocator_traits<Allocator>::construct(*alloc, &slot->mutable_value,
std::forward<Args>(args)...);
} else {
absl::allocator_traits<Allocator>::construct(*alloc, &slot->value,
std::forward<Args>(args)...);
}
}
// Construct this slot by moving from another slot.
template <class Allocator>
static void construct(Allocator* alloc, slot_type* slot, slot_type* other) {
emplace(slot);
if (kMutableKeys::value) {
absl::allocator_traits<Allocator>::construct(
*alloc, &slot->mutable_value, std::move(other->mutable_value));
} else {
absl::allocator_traits<Allocator>::construct(*alloc, &slot->value,
std::move(other->value));
}
}
template <class Allocator>
static void destroy(Allocator* alloc, slot_type* slot) {
if (kMutableKeys::value) {
absl::allocator_traits<Allocator>::destroy(*alloc, &slot->mutable_value);
} else {
absl::allocator_traits<Allocator>::destroy(*alloc, &slot->value);
}
}
template <class Allocator>
static void transfer(Allocator* alloc, slot_type* new_slot,
slot_type* old_slot) {
emplace(new_slot);
if (kMutableKeys::value) {
absl::allocator_traits<Allocator>::construct(
*alloc, &new_slot->mutable_value, std::move(old_slot->mutable_value));
} else {
absl::allocator_traits<Allocator>::construct(*alloc, &new_slot->value,
std::move(old_slot->value));
}
destroy(alloc, old_slot);
}
template <class Allocator>
static void swap(Allocator* alloc, slot_type* a, slot_type* b) {
if (kMutableKeys::value) {
using std::swap;
swap(a->mutable_value, b->mutable_value);
} else {
value_type tmp = std::move(a->value);
absl::allocator_traits<Allocator>::destroy(*alloc, &a->value);
absl::allocator_traits<Allocator>::construct(*alloc, &a->value,
std::move(b->value));
absl::allocator_traits<Allocator>::destroy(*alloc, &b->value);
absl::allocator_traits<Allocator>::construct(*alloc, &b->value,
std::move(tmp));
}
}
template <class Allocator>
static void move(Allocator* alloc, slot_type* src, slot_type* dest) {
if (kMutableKeys::value) {
dest->mutable_value = std::move(src->mutable_value);
} else {
absl::allocator_traits<Allocator>::destroy(*alloc, &dest->value);
absl::allocator_traits<Allocator>::construct(*alloc, &dest->value,
std::move(src->value));
}
}
template <class Allocator>
static void move(Allocator* alloc, slot_type* first, slot_type* last,
slot_type* result) {
for (slot_type *src = first, *dest = result; src != last; ++src, ++dest)
move(alloc, src, dest);
}
};
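// A sketch of the slot lifecycle (illustrative only; the destructor is
// deleted, so slots must live in raw storage such as a table's backing array):
//
// using Slot = slot_type<int, std::string>;
// alignas(Slot) unsigned char buf1[sizeof(Slot)], buf2[sizeof(Slot)];
// Slot* a = reinterpret_cast<Slot*>(buf1);
// Slot* b = reinterpret_cast<Slot*>(buf2);
// std::allocator<std::pair<const int, std::string>> alloc;
// Slot::construct(&alloc, a, std::make_pair(1, std::string("one")));
// Slot::transfer(&alloc, b, a);  // `b` is INITIALIZED, `a` is UNINITIALIZED.
// Slot::destroy(&alloc, b);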
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_CONTAINER_MEMORY_H_

@@ -0,0 +1,190 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/internal/container_memory.h"
#include <cstdint>
#include <cstring>
#include <tuple>
#include <utility>
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include "absl/strings/string_view.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
using ::testing::Pair;
TEST(Memory, AlignmentLargerThanBase) {
std::allocator<int8_t> alloc;
void* mem = Allocate<2>(&alloc, 3);
EXPECT_EQ(0, reinterpret_cast<uintptr_t>(mem) % 2);
memcpy(mem, "abc", 3);
Deallocate<2>(&alloc, mem, 3);
}
TEST(Memory, AlignmentSmallerThanBase) {
std::allocator<int64_t> alloc;
void* mem = Allocate<2>(&alloc, 3);
EXPECT_EQ(0, reinterpret_cast<uintptr_t>(mem) % 2);
memcpy(mem, "abc", 3);
Deallocate<2>(&alloc, mem, 3);
}
class Fixture : public ::testing::Test {
using Alloc = std::allocator<std::string>;
public:
Fixture() { ptr_ = std::allocator_traits<Alloc>::allocate(*alloc(), 1); }
~Fixture() override {
std::allocator_traits<Alloc>::destroy(*alloc(), ptr_);
std::allocator_traits<Alloc>::deallocate(*alloc(), ptr_, 1);
}
std::string* ptr() { return ptr_; }
Alloc* alloc() { return &alloc_; }
private:
Alloc alloc_;
std::string* ptr_;
};
TEST_F(Fixture, ConstructNoArgs) {
ConstructFromTuple(alloc(), ptr(), std::forward_as_tuple());
EXPECT_EQ(*ptr(), "");
}
TEST_F(Fixture, ConstructOneArg) {
ConstructFromTuple(alloc(), ptr(), std::forward_as_tuple("abcde"));
EXPECT_EQ(*ptr(), "abcde");
}
TEST_F(Fixture, ConstructTwoArg) {
ConstructFromTuple(alloc(), ptr(), std::forward_as_tuple(5, 'a'));
EXPECT_EQ(*ptr(), "aaaaa");
}
TEST(PairArgs, NoArgs) {
EXPECT_THAT(PairArgs(),
Pair(std::forward_as_tuple(), std::forward_as_tuple()));
}
TEST(PairArgs, TwoArgs) {
EXPECT_EQ(
std::make_pair(std::forward_as_tuple(1), std::forward_as_tuple('A')),
PairArgs(1, 'A'));
}
TEST(PairArgs, Pair) {
EXPECT_EQ(
std::make_pair(std::forward_as_tuple(1), std::forward_as_tuple('A')),
PairArgs(std::make_pair(1, 'A')));
}
TEST(PairArgs, Piecewise) {
EXPECT_EQ(
std::make_pair(std::forward_as_tuple(1), std::forward_as_tuple('A')),
PairArgs(std::piecewise_construct, std::forward_as_tuple(1),
std::forward_as_tuple('A')));
}
TEST(WithConstructed, Simple) {
EXPECT_EQ(1, WithConstructed<absl::string_view>(
std::make_tuple(std::string("a")),
[](absl::string_view str) { return str.size(); }));
}
template <class F, class Arg>
decltype(DecomposeValue(std::declval<F>(), std::declval<Arg>()))
DecomposeValueImpl(int, F&& f, Arg&& arg) {
return DecomposeValue(std::forward<F>(f), std::forward<Arg>(arg));
}
template <class F, class Arg>
const char* DecomposeValueImpl(char, F&& f, Arg&& arg) {
return "not decomposable";
}
template <class F, class Arg>
decltype(DecomposeValueImpl(0, std::declval<F>(), std::declval<Arg>()))
TryDecomposeValue(F&& f, Arg&& arg) {
return DecomposeValueImpl(0, std::forward<F>(f), std::forward<Arg>(arg));
}
TEST(DecomposeValue, Decomposable) {
auto f = [](const int& x, int&& y) {
EXPECT_EQ(&x, &y);
EXPECT_EQ(42, x);
return 'A';
};
EXPECT_EQ('A', TryDecomposeValue(f, 42));
}
TEST(DecomposeValue, NotDecomposable) {
auto f = [](void*) {
ADD_FAILURE() << "Must not be called";
return 'A';
};
EXPECT_STREQ("not decomposable", TryDecomposeValue(f, 42));
}
template <class F, class... Args>
decltype(DecomposePair(std::declval<F>(), std::declval<Args>()...))
DecomposePairImpl(int, F&& f, Args&&... args) {
return DecomposePair(std::forward<F>(f), std::forward<Args>(args)...);
}
template <class F, class... Args>
const char* DecomposePairImpl(char, F&& f, Args&&... args) {
return "not decomposable";
}
template <class F, class... Args>
decltype(DecomposePairImpl(0, std::declval<F>(), std::declval<Args>()...))
TryDecomposePair(F&& f, Args&&... args) {
return DecomposePairImpl(0, std::forward<F>(f), std::forward<Args>(args)...);
}
TEST(DecomposePair, Decomposable) {
auto f = [](const int& x, std::piecewise_construct_t, std::tuple<int&&> k,
std::tuple<double>&& v) {
EXPECT_EQ(&x, &std::get<0>(k));
EXPECT_EQ(42, x);
EXPECT_EQ(0.5, std::get<0>(v));
return 'A';
};
EXPECT_EQ('A', TryDecomposePair(f, 42, 0.5));
EXPECT_EQ('A', TryDecomposePair(f, std::make_pair(42, 0.5)));
EXPECT_EQ('A', TryDecomposePair(f, std::piecewise_construct,
std::make_tuple(42), std::make_tuple(0.5)));
}
TEST(DecomposePair, NotDecomposable) {
auto f = [](...) {
ADD_FAILURE() << "Must not be called";
return 'A';
};
EXPECT_STREQ("not decomposable",
TryDecomposePair(f));
EXPECT_STREQ("not decomposable",
TryDecomposePair(f, std::piecewise_construct, std::make_tuple(),
std::make_tuple(0.5)));
}
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,145 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Define the default Hash and Eq functions for SwissTable containers.
//
// std::hash<T> and std::equal_to<T> are not appropriate hash and equal
// functions for SwissTable containers. There are two reasons for this.
//
// SwissTable containers are power-of-2-sized containers:
//
// This means they use the lower bits of the hash value to find the slot for
// each entry. The typical hash function for integral types is the identity,
// which is a very weak hash function for SwissTable (or any other
// power-of-2-sized hashtable implementation) and will lead to excessive
// collisions. For SwissTable we use murmur3-style mixing to reduce collisions
// to a minimum.
//
// SwissTable containers support heterogeneous lookup:
//
// In order to make heterogeneous lookup work, hash and equal functions must be
// polymorphic. At the same time they have to satisfy the same requirements the
// C++ standard imposes on hash functions and equality operators. That is:
//
// if hash_default_eq<T>(a, b) returns true for any a and b of type T, then
// hash_default_hash<T>(a) must equal hash_default_hash<T>(b)
//
// For SwissTable containers this requirement is relaxed to allow a and b of
// any and possibly different types. Note that like the standard the hash and
// equal functions are still bound to T. This is important because some type U
// can be hashed by/tested for equality differently depending on T. A notable
// example is `const char*`. `const char*` is treated as a c-style string when
// the hash function is hash<string> but as a pointer when the hash function is
// hash<void*>.
//
#ifndef ABSL_CONTAINER_INTERNAL_HASH_FUNCTION_DEFAULTS_H_
#define ABSL_CONTAINER_INTERNAL_HASH_FUNCTION_DEFAULTS_H_
#include <stdint.h>
#include <cstddef>
#include <memory>
#include <string>
#include <type_traits>
#include "absl/base/config.h"
#include "absl/hash/hash.h"
#include "absl/strings/string_view.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
// The hash of an object of type T is computed by using absl::Hash.
template <class T, class E = void>
struct HashEq {
using Hash = absl::Hash<T>;
using Eq = std::equal_to<T>;
};
struct StringHash {
using is_transparent = void;
size_t operator()(absl::string_view v) const {
return absl::Hash<absl::string_view>{}(v);
}
};
// Supports heterogeneous lookup for string-like elements.
struct StringHashEq {
using Hash = StringHash;
struct Eq {
using is_transparent = void;
bool operator()(absl::string_view lhs, absl::string_view rhs) const {
return lhs == rhs;
}
};
};
template <>
struct HashEq<std::string> : StringHashEq {};
template <>
struct HashEq<absl::string_view> : StringHashEq {};
// Supports heterogeneous lookup for pointers and smart pointers.
template <class T>
struct HashEq<T*> {
struct Hash {
using is_transparent = void;
template <class U>
size_t operator()(const U& ptr) const {
return absl::Hash<const T*>{}(HashEq::ToPtr(ptr));
}
};
struct Eq {
using is_transparent = void;
template <class A, class B>
bool operator()(const A& a, const B& b) const {
return HashEq::ToPtr(a) == HashEq::ToPtr(b);
}
};
private:
static const T* ToPtr(const T* ptr) { return ptr; }
template <class U, class D>
static const T* ToPtr(const std::unique_ptr<U, D>& ptr) {
return ptr.get();
}
template <class U>
static const T* ToPtr(const std::shared_ptr<U>& ptr) {
return ptr.get();
}
};
template <class T, class D>
struct HashEq<std::unique_ptr<T, D>> : HashEq<T*> {};
template <class T>
struct HashEq<std::shared_ptr<T>> : HashEq<T*> {};
// This header's visibility is restricted. If you need to access the default
// hasher, please use the container's ::hasher alias instead.
//
// Example: typename Hash = typename absl::flat_hash_map<K, V>::hasher
template <class T>
using hash_default_hash = typename container_internal::HashEq<T>::Hash;
// This header's visibility is restricted. If you need to access the default
// key equal, please use the container's ::key_equal alias instead.
//
// Example: typename Eq = typename absl::flat_hash_map<K, V, Hash>::key_equal
template <class T>
using hash_default_eq = typename container_internal::HashEq<T>::Eq;
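//
// A sketch of the heterogeneous lookup these defaults enable (exercised by
// this patch's tests): a std::string key can be hashed and compared via
// absl::string_view without materializing a temporary std::string:
//
// hash_default_hash<std::string> hash;
// hash_default_eq<std::string> eq;
// hash(absl::string_view("a")) == hash(std::string("a"));  // same hash
// eq(std::string("a"), absl::string_view("a"));            // true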
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_HASH_FUNCTION_DEFAULTS_H_

@@ -0,0 +1,303 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/internal/hash_function_defaults.h"
#include <functional>
#include <type_traits>
#include <utility>
#include "gtest/gtest.h"
#include "absl/strings/string_view.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
using ::testing::Types;
TEST(Eq, Int32) {
hash_default_eq<int32_t> eq;
EXPECT_TRUE(eq(1, 1u));
EXPECT_TRUE(eq(1, char{1}));
EXPECT_TRUE(eq(1, true));
EXPECT_TRUE(eq(1, double{1.1}));
EXPECT_FALSE(eq(1, char{2}));
EXPECT_FALSE(eq(1, 2u));
EXPECT_FALSE(eq(1, false));
EXPECT_FALSE(eq(1, 2.));
}
TEST(Hash, Int32) {
hash_default_hash<int32_t> hash;
auto h = hash(1);
EXPECT_EQ(h, hash(1u));
EXPECT_EQ(h, hash(char{1}));
EXPECT_EQ(h, hash(true));
EXPECT_EQ(h, hash(double{1.1}));
EXPECT_NE(h, hash(2u));
EXPECT_NE(h, hash(char{2}));
EXPECT_NE(h, hash(false));
EXPECT_NE(h, hash(2.));
}
enum class MyEnum { A, B, C, D };
TEST(Eq, Enum) {
hash_default_eq<MyEnum> eq;
EXPECT_TRUE(eq(MyEnum::A, MyEnum::A));
EXPECT_FALSE(eq(MyEnum::A, MyEnum::B));
}
TEST(Hash, Enum) {
hash_default_hash<MyEnum> hash;
for (MyEnum e : {MyEnum::A, MyEnum::B, MyEnum::C}) {
auto h = hash(e);
EXPECT_EQ(h, hash_default_hash<int>{}(static_cast<int>(e)));
EXPECT_NE(h, hash(MyEnum::D));
}
}
using StringTypes = ::testing::Types<std::string, absl::string_view>;
template <class T>
struct EqString : ::testing::Test {
hash_default_eq<T> key_eq;
};
TYPED_TEST_CASE(EqString, StringTypes);
template <class T>
struct HashString : ::testing::Test {
hash_default_hash<T> hasher;
};
TYPED_TEST_CASE(HashString, StringTypes);
TYPED_TEST(EqString, Works) {
auto eq = this->key_eq;
EXPECT_TRUE(eq("a", "a"));
EXPECT_TRUE(eq("a", absl::string_view("a")));
EXPECT_TRUE(eq("a", std::string("a")));
EXPECT_FALSE(eq("a", "b"));
EXPECT_FALSE(eq("a", absl::string_view("b")));
EXPECT_FALSE(eq("a", std::string("b")));
}
TYPED_TEST(HashString, Works) {
auto hash = this->hasher;
auto h = hash("a");
EXPECT_EQ(h, hash(absl::string_view("a")));
EXPECT_EQ(h, hash(std::string("a")));
EXPECT_NE(h, hash(absl::string_view("b")));
EXPECT_NE(h, hash(std::string("b")));
}
struct NoDeleter {
template <class T>
void operator()(const T* ptr) const {}
};
using PointerTypes =
::testing::Types<const int*, int*, std::unique_ptr<const int>,
std::unique_ptr<const int, NoDeleter>,
std::unique_ptr<int>, std::unique_ptr<int, NoDeleter>,
std::shared_ptr<const int>, std::shared_ptr<int>>;
template <class T>
struct EqPointer : ::testing::Test {
hash_default_eq<T> key_eq;
};
TYPED_TEST_CASE(EqPointer, PointerTypes);
template <class T>
struct HashPointer : ::testing::Test {
hash_default_hash<T> hasher;
};
TYPED_TEST_CASE(HashPointer, PointerTypes);
TYPED_TEST(EqPointer, Works) {
int dummy;
auto eq = this->key_eq;
auto sptr = std::make_shared<int>();
std::shared_ptr<const int> csptr = sptr;
int* ptr = sptr.get();
const int* cptr = ptr;
std::unique_ptr<int, NoDeleter> uptr(ptr);
std::unique_ptr<const int, NoDeleter> cuptr(ptr);
EXPECT_TRUE(eq(ptr, cptr));
EXPECT_TRUE(eq(ptr, sptr));
EXPECT_TRUE(eq(ptr, uptr));
EXPECT_TRUE(eq(ptr, csptr));
EXPECT_TRUE(eq(ptr, cuptr));
EXPECT_FALSE(eq(&dummy, cptr));
EXPECT_FALSE(eq(&dummy, sptr));
EXPECT_FALSE(eq(&dummy, uptr));
EXPECT_FALSE(eq(&dummy, csptr));
EXPECT_FALSE(eq(&dummy, cuptr));
}
TEST(Hash, DerivedAndBase) {
struct Base {};
struct Derived : Base {};
hash_default_hash<Base*> hasher;
Base base;
Derived derived;
EXPECT_NE(hasher(&base), hasher(&derived));
EXPECT_EQ(hasher(static_cast<Base*>(&derived)), hasher(&derived));
auto dp = std::make_shared<Derived>();
EXPECT_EQ(hasher(static_cast<Base*>(dp.get())), hasher(dp));
}
TEST(Hash, FunctionPointer) {
using Func = int (*)();
hash_default_hash<Func> hasher;
hash_default_eq<Func> eq;
Func p1 = [] { return 1; }, p2 = [] { return 2; };
EXPECT_EQ(hasher(p1), hasher(p1));
EXPECT_TRUE(eq(p1, p1));
EXPECT_NE(hasher(p1), hasher(p2));
EXPECT_FALSE(eq(p1, p2));
}
TYPED_TEST(HashPointer, Works) {
int dummy;
auto hash = this->hasher;
auto sptr = std::make_shared<int>();
std::shared_ptr<const int> csptr = sptr;
int* ptr = sptr.get();
const int* cptr = ptr;
std::unique_ptr<int, NoDeleter> uptr(ptr);
std::unique_ptr<const int, NoDeleter> cuptr(ptr);
EXPECT_EQ(hash(ptr), hash(cptr));
EXPECT_EQ(hash(ptr), hash(sptr));
EXPECT_EQ(hash(ptr), hash(uptr));
EXPECT_EQ(hash(ptr), hash(csptr));
EXPECT_EQ(hash(ptr), hash(cuptr));
EXPECT_NE(hash(&dummy), hash(cptr));
EXPECT_NE(hash(&dummy), hash(sptr));
EXPECT_NE(hash(&dummy), hash(uptr));
EXPECT_NE(hash(&dummy), hash(csptr));
EXPECT_NE(hash(&dummy), hash(cuptr));
}
// Cartesian product of (string, std::string, absl::string_view)
// with (string, std::string, absl::string_view, const char*).
using StringTypesCartesianProduct = Types<
// clang-format off
std::pair<std::string, std::string>,
std::pair<std::string, absl::string_view>,
std::pair<std::string, const char*>,
std::pair<absl::string_view, std::string>,
std::pair<absl::string_view, absl::string_view>,
std::pair<absl::string_view, const char*>>;
// clang-format on
constexpr char kFirstString[] = "abc123";
constexpr char kSecondString[] = "ijk456";
template <typename T>
struct StringLikeTest : public ::testing::Test {
typename T::first_type a1{kFirstString};
typename T::second_type b1{kFirstString};
typename T::first_type a2{kSecondString};
typename T::second_type b2{kSecondString};
hash_default_eq<typename T::first_type> eq;
hash_default_hash<typename T::first_type> hash;
};
TYPED_TEST_CASE_P(StringLikeTest);
TYPED_TEST_P(StringLikeTest, Eq) {
EXPECT_TRUE(this->eq(this->a1, this->b1));
EXPECT_TRUE(this->eq(this->b1, this->a1));
}
TYPED_TEST_P(StringLikeTest, NotEq) {
EXPECT_FALSE(this->eq(this->a1, this->b2));
EXPECT_FALSE(this->eq(this->b2, this->a1));
}
TYPED_TEST_P(StringLikeTest, HashEq) {
EXPECT_EQ(this->hash(this->a1), this->hash(this->b1));
EXPECT_EQ(this->hash(this->a2), this->hash(this->b2));
// It would be a poor hash function which collides on these strings.
EXPECT_NE(this->hash(this->a1), this->hash(this->b2));
}
TYPED_TEST_CASE(StringLikeTest, StringTypesCartesianProduct);
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
enum Hash : size_t {
kStd = 0x2, // std::hash
#ifdef _MSC_VER
kExtension = kStd, // In MSVC, std::hash == ::hash
#else // _MSC_VER
kExtension = 0x4, // ::hash (GCC extension)
#endif // _MSC_VER
};
// H is a bitmask of Hash enumerations.
// Hashable<H> is hashable via all means specified in H.
template <int H>
struct Hashable {
static constexpr bool HashableBy(Hash h) { return h & H; }
};
namespace std {
template <int H>
struct hash<Hashable<H>> {
template <class E = Hashable<H>,
class = typename std::enable_if<E::HashableBy(kStd)>::type>
size_t operator()(E) const {
return kStd;
}
};
} // namespace std
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
template <class T>
size_t Hash(const T& v) {
return hash_default_hash<T>()(v);
}
TEST(Delegate, HashDispatch) {
EXPECT_EQ(Hash(kStd), Hash(Hashable<kStd>()));
}
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,74 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/internal/hash_generator_testing.h"
#include <deque>
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace hash_internal {
namespace {
class RandomDeviceSeedSeq {
public:
using result_type = typename std::random_device::result_type;
template <class Iterator>
void generate(Iterator start, Iterator end) {
while (start != end) {
*start = gen_();
++start;
}
}
private:
std::random_device gen_;
};
} // namespace
std::mt19937_64* GetSharedRng() {
RandomDeviceSeedSeq seed_seq;
static auto* rng = new std::mt19937_64(seed_seq);
return rng;
}
std::string Generator<std::string>::operator()() const {
// NOLINTNEXTLINE(runtime/int)
std::uniform_int_distribution<short> chars(0x20, 0x7E);
std::string res;
res.resize(32);
std::generate(res.begin(), res.end(),
[&]() { return chars(*GetSharedRng()); });
return res;
}
absl::string_view Generator<absl::string_view>::operator()() const {
static auto* arena = new std::deque<std::string>();
// NOLINTNEXTLINE(runtime/int)
std::uniform_int_distribution<short> chars(0x20, 0x7E);
arena->emplace_back();
auto& res = arena->back();
res.resize(32);
std::generate(res.begin(), res.end(),
[&]() { return chars(*GetSharedRng()); });
return res;
}
} // namespace hash_internal
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,152 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Generates random values for testing. Specialized only for the few types we
// care about.
#ifndef ABSL_CONTAINER_INTERNAL_HASH_GENERATOR_TESTING_H_
#define ABSL_CONTAINER_INTERNAL_HASH_GENERATOR_TESTING_H_
#include <stdint.h>
#include <algorithm>
#include <iosfwd>
#include <random>
#include <tuple>
#include <type_traits>
#include <utility>
#include "absl/container/internal/hash_policy_testing.h"
#include "absl/meta/type_traits.h"
#include "absl/strings/string_view.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace hash_internal {
namespace generator_internal {
template <class Container, class = void>
struct IsMap : std::false_type {};
template <class Map>
struct IsMap<Map, absl::void_t<typename Map::mapped_type>> : std::true_type {};
} // namespace generator_internal
std::mt19937_64* GetSharedRng();
enum Enum {
kEnumEmpty,
kEnumDeleted,
};
enum class EnumClass : uint64_t {
kEmpty,
kDeleted,
};
inline std::ostream& operator<<(std::ostream& o, const EnumClass& ec) {
return o << static_cast<uint64_t>(ec);
}
template <class T, class E = void>
struct Generator;
template <class T>
struct Generator<T, typename std::enable_if<std::is_integral<T>::value>::type> {
T operator()() const {
std::uniform_int_distribution<T> dist;
return dist(*GetSharedRng());
}
};
template <>
struct Generator<Enum> {
Enum operator()() const {
std::uniform_int_distribution<typename std::underlying_type<Enum>::type>
dist;
while (true) {
auto variate = dist(*GetSharedRng());
if (variate != kEnumEmpty && variate != kEnumDeleted)
return static_cast<Enum>(variate);
}
}
};
template <>
struct Generator<EnumClass> {
EnumClass operator()() const {
std::uniform_int_distribution<
typename std::underlying_type<EnumClass>::type>
dist;
while (true) {
EnumClass variate = static_cast<EnumClass>(dist(*GetSharedRng()));
if (variate != EnumClass::kEmpty && variate != EnumClass::kDeleted)
return static_cast<EnumClass>(variate);
}
}
};
template <>
struct Generator<std::string> {
std::string operator()() const;
};
template <>
struct Generator<absl::string_view> {
absl::string_view operator()() const;
};
template <>
struct Generator<NonStandardLayout> {
NonStandardLayout operator()() const {
return NonStandardLayout(Generator<std::string>()());
}
};
template <class K, class V>
struct Generator<std::pair<K, V>> {
std::pair<K, V> operator()() const {
return std::pair<K, V>(Generator<typename std::decay<K>::type>()(),
Generator<typename std::decay<V>::type>()());
}
};
template <class... Ts>
struct Generator<std::tuple<Ts...>> {
std::tuple<Ts...> operator()() const {
return std::tuple<Ts...>(Generator<typename std::decay<Ts>::type>()()...);
}
};
template <class U>
struct Generator<U, absl::void_t<decltype(std::declval<U&>().key()),
decltype(std::declval<U&>().value())>>
: Generator<std::pair<
typename std::decay<decltype(std::declval<U&>().key())>::type,
typename std::decay<decltype(std::declval<U&>().value())>::type>> {};
template <class Container>
using GeneratedType = decltype(
std::declval<const Generator<
typename std::conditional<generator_internal::IsMap<Container>::value,
typename Container::value_type,
typename Container::key_type>::type>&>()());
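// For example (a sketch): for set-like containers GeneratedType is the key
// type, while for map-like containers it is the container's value_type:
//
// GeneratedType<absl::flat_hash_set<int>>              -> int
// GeneratedType<absl::flat_hash_map<int, std::string>>
//     -> std::pair<const int, std::string>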
} // namespace hash_internal
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_HASH_GENERATOR_TESTING_H_

@@ -0,0 +1,184 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Utilities to help tests verify that hash tables properly handle stateful
// allocators and hash functions.
#ifndef ABSL_CONTAINER_INTERNAL_HASH_POLICY_TESTING_H_
#define ABSL_CONTAINER_INTERNAL_HASH_POLICY_TESTING_H_
#include <cstdlib>
#include <limits>
#include <memory>
#include <ostream>
#include <type_traits>
#include <utility>
#include <vector>
#include "absl/hash/hash.h"
#include "absl/strings/string_view.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace hash_testing_internal {
template <class Derived>
struct WithId {
WithId() : id_(next_id<Derived>()) {}
WithId(const WithId& that) : id_(that.id_) {}
WithId(WithId&& that) : id_(that.id_) { that.id_ = 0; }
WithId& operator=(const WithId& that) {
id_ = that.id_;
return *this;
}
WithId& operator=(WithId&& that) {
id_ = that.id_;
that.id_ = 0;
return *this;
}
size_t id() const { return id_; }
friend bool operator==(const WithId& a, const WithId& b) {
return a.id_ == b.id_;
}
friend bool operator!=(const WithId& a, const WithId& b) { return !(a == b); }
protected:
explicit WithId(size_t id) : id_(id) {}
private:
size_t id_;
template <class T>
static size_t next_id() {
// 0 is reserved for the moved-from state.
static size_t gId = 1;
return gId++;
}
};
} // namespace hash_testing_internal
struct NonStandardLayout {
NonStandardLayout() {}
explicit NonStandardLayout(std::string s) : value(std::move(s)) {}
virtual ~NonStandardLayout() {}
friend bool operator==(const NonStandardLayout& a,
const NonStandardLayout& b) {
return a.value == b.value;
}
friend bool operator!=(const NonStandardLayout& a,
const NonStandardLayout& b) {
return a.value != b.value;
}
template <typename H>
friend H AbslHashValue(H h, const NonStandardLayout& v) {
return H::combine(std::move(h), v.value);
}
std::string value;
};
struct StatefulTestingHash
: absl::container_internal::hash_testing_internal::WithId<
StatefulTestingHash> {
template <class T>
size_t operator()(const T& t) const {
return absl::Hash<T>{}(t);
}
};
struct StatefulTestingEqual
: absl::container_internal::hash_testing_internal::WithId<
StatefulTestingEqual> {
template <class T, class U>
bool operator()(const T& t, const U& u) const {
return t == u;
}
};
// It is expected that Alloc() == Alloc() for all allocators, so we cannot use
// the WithId base class; instead, we assign ids explicitly.
template <class T = int>
struct Alloc : std::allocator<T> {
using propagate_on_container_swap = std::true_type;
// Using old paradigm for this to ensure compatibility.
explicit Alloc(size_t id = 0) : id_(id) {}
Alloc(const Alloc&) = default;
Alloc& operator=(const Alloc&) = default;
template <class U>
Alloc(const Alloc<U>& that) : std::allocator<T>(that), id_(that.id()) {}
template <class U>
struct rebind {
using other = Alloc<U>;
};
size_t id() const { return id_; }
friend bool operator==(const Alloc& a, const Alloc& b) {
return a.id_ == b.id_;
}
friend bool operator!=(const Alloc& a, const Alloc& b) { return !(a == b); }
private:
size_t id_ = (std::numeric_limits<size_t>::max)();
};
template <class Map>
auto items(const Map& m) -> std::vector<
std::pair<typename Map::key_type, typename Map::mapped_type>> {
using std::get;
std::vector<std::pair<typename Map::key_type, typename Map::mapped_type>> res;
res.reserve(m.size());
for (const auto& v : m) res.emplace_back(get<0>(v), get<1>(v));
return res;
}
template <class Set>
auto keys(const Set& s)
-> std::vector<typename std::decay<typename Set::key_type>::type> {
std::vector<typename std::decay<typename Set::key_type>::type> res;
res.reserve(s.size());
for (const auto& v : s) res.emplace_back(v);
return res;
}
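// A usage sketch (illustrative only): copy a container's contents into plain
// vectors so ordinary matchers and comparisons can inspect them:
//
// absl::flat_hash_map<int, std::string> m = {{1, "a"}, {2, "b"}};
// auto v = items(m);  // std::vector<std::pair<int, std::string>>
// absl::flat_hash_set<int> s = {1, 2, 3};
// auto k = keys(s);   // std::vector<int>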
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
// ABSL_UNORDERED_SUPPORTS_ALLOC_CTORS is false for glibcxx versions
// where the unordered containers are missing certain constructors that
// take allocator arguments. This test is defined ad-hoc for the platforms
// we care about (notably Crosstool 17) because libstdcxx's useless
// versioning scheme precludes a more principled solution.
// From GCC-4.9 Changelog: (src: https://gcc.gnu.org/gcc-4.9/changes.html)
// "the unordered associative containers in <unordered_map> and <unordered_set>
// meet the allocator-aware container requirements;"
#if (defined(__GLIBCXX__) && __GLIBCXX__ <= 20140425 ) || \
( __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 9 ))
#define ABSL_UNORDERED_SUPPORTS_ALLOC_CTORS 0
#else
#define ABSL_UNORDERED_SUPPORTS_ALLOC_CTORS 1
#endif
#endif // ABSL_CONTAINER_INTERNAL_HASH_POLICY_TESTING_H_

@@ -0,0 +1,45 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/internal/hash_policy_testing.h"
#include "gtest/gtest.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
TEST(_, Hash) {
StatefulTestingHash h1;
EXPECT_EQ(1, h1.id());
StatefulTestingHash h2;
EXPECT_EQ(2, h2.id());
StatefulTestingHash h1c(h1);
EXPECT_EQ(1, h1c.id());
StatefulTestingHash h2m(std::move(h2));
EXPECT_EQ(2, h2m.id());
EXPECT_EQ(0, h2.id());
StatefulTestingHash h3;
EXPECT_EQ(3, h3.id());
h3 = StatefulTestingHash();
EXPECT_EQ(4, h3.id());
h3 = std::move(h1);
EXPECT_EQ(1, h3.id());
}
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,191 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef ABSL_CONTAINER_INTERNAL_HASH_POLICY_TRAITS_H_
#define ABSL_CONTAINER_INTERNAL_HASH_POLICY_TRAITS_H_
#include <cstddef>
#include <memory>
#include <type_traits>
#include <utility>
#include "absl/meta/type_traits.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
// Defines how slots are initialized/destroyed/moved.
template <class Policy, class = void>
struct hash_policy_traits {
private:
struct ReturnKey {
// We return `Key` here.
// When Key=T&, we forward the lvalue reference.
// When Key=T, we return by value to avoid a dangling reference.
// e.g., for string_hash_map.
template <class Key, class... Args>
Key operator()(Key&& k, const Args&...) const {
return std::forward<Key>(k);
}
};
template <class P = Policy, class = void>
struct ConstantIteratorsImpl : std::false_type {};
template <class P>
struct ConstantIteratorsImpl<P, absl::void_t<typename P::constant_iterators>>
: P::constant_iterators {};
public:
// The actual object stored in the hash table.
using slot_type = typename Policy::slot_type;
// The type of the keys stored in the hashtable.
using key_type = typename Policy::key_type;
// The argument type for insertions into the hashtable. This differs from
// value_type for performance reasons. See the initializer_list constructor
// and the insert() member functions for more details.
using init_type = typename Policy::init_type;
using reference = decltype(Policy::element(std::declval<slot_type*>()));
using pointer = typename std::remove_reference<reference>::type*;
using value_type = typename std::remove_reference<reference>::type;
// Policies can set this variable to tell raw_hash_set that all iterators
// should be constant, even `iterator`. This is useful for set-like
// containers.
// Defaults to false if not provided by the policy.
using constant_iterators = ConstantIteratorsImpl<>;
// PRECONDITION: `slot` is UNINITIALIZED
// POSTCONDITION: `slot` is INITIALIZED
template <class Alloc, class... Args>
static void construct(Alloc* alloc, slot_type* slot, Args&&... args) {
Policy::construct(alloc, slot, std::forward<Args>(args)...);
}
// PRECONDITION: `slot` is INITIALIZED
// POSTCONDITION: `slot` is UNINITIALIZED
template <class Alloc>
static void destroy(Alloc* alloc, slot_type* slot) {
Policy::destroy(alloc, slot);
}
// Transfers the contents of `old_slot` to `new_slot`. Any memory allocated by
// the allocator inside `old_slot` may be transferred to `new_slot`.
//
// OPTIONAL: defaults to:
//
// clone(new_slot, std::move(*old_slot));
// destroy(old_slot);
//
// PRECONDITION: `new_slot` is UNINITIALIZED and `old_slot` is INITIALIZED
// POSTCONDITION: `new_slot` is INITIALIZED and `old_slot` is
// UNINITIALIZED
template <class Alloc>
static void transfer(Alloc* alloc, slot_type* new_slot, slot_type* old_slot) {
transfer_impl(alloc, new_slot, old_slot, 0);
}
// PRECONDITION: `slot` is INITIALIZED
// POSTCONDITION: `slot` is INITIALIZED
template <class P = Policy>
static auto element(slot_type* slot) -> decltype(P::element(slot)) {
return P::element(slot);
}
// Returns the amount of memory owned by `slot`, exclusive of `sizeof(*slot)`.
//
// If `slot` is nullptr, returns the constant amount of memory owned by any
// full slot or -1 if slots own variable amounts of memory.
//
// PRECONDITION: `slot` is INITIALIZED or nullptr
template <class P = Policy>
static size_t space_used(const slot_type* slot) {
return P::space_used(slot);
}
// Provides generalized access to the key for elements, both for elements in
// the table and for elements that have not yet been inserted (or even
// constructed). We would like an API that allows us to say: `key(args...)`
// but we cannot do that for all cases, so we use this more general API that
// can be used for many things, including the following:
//
// - Given an element in a table, get its key.
// - Given an element initializer, get its key.
// - Given `emplace()` arguments, get the element key.
//
// Implementations of this must adhere to a very strict technical
// specification around aliasing and consuming arguments:
//
// Let `value_type` be the result type of `element()` without ref- and
// cv-qualifiers. The first argument is a functor, the rest are constructor
// arguments for `value_type`. Returns `std::forward<F>(f)(k, xs...)`, where
// `k` is the element key, and `xs...` are the new constructor arguments for
// `value_type`. It's allowed for `k` to alias `xs...`, and for both to alias
// `ts...`. The key won't be touched once `xs...` are used to construct an
// element; `ts...` won't be touched at all, which allows `apply()` to consume
// any rvalues among them.
//
// If `value_type` is constructible from `Ts&&...`, `Policy::apply()` must not
// trigger a hard compile error unless it originates from `f`. In other words,
// `Policy::apply()` must be SFINAE-friendly. If `value_type` is not
// constructible from `Ts&&...`, either SFINAE or a hard compile error is OK.
//
// If `Ts...` is `[cv] value_type[&]` or `[cv] init_type[&]`,
// `Policy::apply()` must work. A compile error is not allowed, SFINAE or not.
template <class F, class... Ts, class P = Policy>
static auto apply(F&& f, Ts&&... ts)
-> decltype(P::apply(std::forward<F>(f), std::forward<Ts>(ts)...)) {
return P::apply(std::forward<F>(f), std::forward<Ts>(ts)...);
}
// Returns the "key" portion of the slot.
// Used for node handle manipulation.
template <class P = Policy>
static auto key(slot_type* slot)
-> decltype(P::apply(ReturnKey(), element(slot))) {
return P::apply(ReturnKey(), element(slot));
}
// Returns the "value" (as opposed to the "key") portion of the element. Used
// by maps to implement `operator[]`, `at()` and `insert_or_assign()`.
template <class T, class P = Policy>
static auto value(T* elem) -> decltype(P::value(elem)) {
return P::value(elem);
}
private:
// Use auto -> decltype as an enabler.
template <class Alloc, class P = Policy>
static auto transfer_impl(Alloc* alloc, slot_type* new_slot,
slot_type* old_slot, int)
-> decltype((void)P::transfer(alloc, new_slot, old_slot)) {
P::transfer(alloc, new_slot, old_slot);
}
template <class Alloc>
static void transfer_impl(Alloc* alloc, slot_type* new_slot,
slot_type* old_slot, char) {
construct(alloc, new_slot, std::move(element(old_slot)));
destroy(alloc, old_slot);
}
};
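// A minimal sketch of a set-like policy that satisfies this interface
// (hypothetical, for illustration; DecomposeValue comes from
// container_memory.h earlier in this patch):
//
// struct IntPolicy {
//   using slot_type = int;
//   using key_type = int;
//   using init_type = int;
//   template <class Alloc, class... Args>
//   static void construct(Alloc* alloc, slot_type* slot, Args&&... args) {
//     std::allocator_traits<Alloc>::construct(*alloc, slot,
//                                             std::forward<Args>(args)...);
//   }
//   template <class Alloc>
//   static void destroy(Alloc* alloc, slot_type* slot) {
//     std::allocator_traits<Alloc>::destroy(*alloc, slot);
//   }
//   static slot_type& element(slot_type* slot) { return *slot; }
//   static size_t space_used(const slot_type*) { return 0; }
//   template <class F, class... Args>
//   static auto apply(F&& f, Args&&... args)
//       -> decltype(DecomposeValue(std::forward<F>(f),
//                                  std::forward<Args>(args)...)) {
//     return DecomposeValue(std::forward<F>(f), std::forward<Args>(args)...);
//   }
// };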
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_HASH_POLICY_TRAITS_H_

@@ -0,0 +1,144 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/container/internal/hash_policy_traits.h"
#include <functional>
#include <memory>
#include <new>
#include "gmock/gmock.h"
#include "gtest/gtest.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace {
using ::testing::MockFunction;
using ::testing::Return;
using ::testing::ReturnRef;
using Alloc = std::allocator<int>;
using Slot = int;
struct PolicyWithoutOptionalOps {
using slot_type = Slot;
using key_type = Slot;
using init_type = Slot;
static std::function<void(void*, Slot*, Slot)> construct;
static std::function<void(void*, Slot*)> destroy;
static std::function<Slot&(Slot*)> element;
static int apply(int v) { return apply_impl(v); }
static std::function<int(int)> apply_impl;
static std::function<Slot&(Slot*)> value;
};
std::function<void(void*, Slot*, Slot)> PolicyWithoutOptionalOps::construct;
std::function<void(void*, Slot*)> PolicyWithoutOptionalOps::destroy;
std::function<Slot&(Slot*)> PolicyWithoutOptionalOps::element;
std::function<int(int)> PolicyWithoutOptionalOps::apply_impl;
std::function<Slot&(Slot*)> PolicyWithoutOptionalOps::value;
struct PolicyWithOptionalOps : PolicyWithoutOptionalOps {
static std::function<void(void*, Slot*, Slot*)> transfer;
};
std::function<void(void*, Slot*, Slot*)> PolicyWithOptionalOps::transfer;
struct Test : ::testing::Test {
Test() {
PolicyWithoutOptionalOps::construct = [&](void* a1, Slot* a2, Slot a3) {
construct.Call(a1, a2, std::move(a3));
};
PolicyWithoutOptionalOps::destroy = [&](void* a1, Slot* a2) {
destroy.Call(a1, a2);
};
PolicyWithoutOptionalOps::element = [&](Slot* a1) -> Slot& {
return element.Call(a1);
};
PolicyWithoutOptionalOps::apply_impl = [&](int a1) -> int {
return apply.Call(a1);
};
PolicyWithoutOptionalOps::value = [&](Slot* a1) -> Slot& {
return value.Call(a1);
};
PolicyWithOptionalOps::transfer = [&](void* a1, Slot* a2, Slot* a3) {
return transfer.Call(a1, a2, a3);
};
}
std::allocator<int> alloc;
int a = 53;
MockFunction<void(void*, Slot*, Slot)> construct;
MockFunction<void(void*, Slot*)> destroy;
MockFunction<Slot&(Slot*)> element;
MockFunction<int(int)> apply;
MockFunction<Slot&(Slot*)> value;
MockFunction<void(void*, Slot*, Slot*)> transfer;
};
TEST_F(Test, construct) {
EXPECT_CALL(construct, Call(&alloc, &a, 53));
hash_policy_traits<PolicyWithoutOptionalOps>::construct(&alloc, &a, 53);
}
TEST_F(Test, destroy) {
EXPECT_CALL(destroy, Call(&alloc, &a));
hash_policy_traits<PolicyWithoutOptionalOps>::destroy(&alloc, &a);
}
TEST_F(Test, element) {
int b = 0;
EXPECT_CALL(element, Call(&a)).WillOnce(ReturnRef(b));
EXPECT_EQ(&b, &hash_policy_traits<PolicyWithoutOptionalOps>::element(&a));
}
TEST_F(Test, apply) {
EXPECT_CALL(apply, Call(42)).WillOnce(Return(1337));
EXPECT_EQ(1337, (hash_policy_traits<PolicyWithoutOptionalOps>::apply(42)));
}
TEST_F(Test, value) {
int b = 0;
EXPECT_CALL(value, Call(&a)).WillOnce(ReturnRef(b));
EXPECT_EQ(&b, &hash_policy_traits<PolicyWithoutOptionalOps>::value(&a));
}
TEST_F(Test, without_transfer) {
int b = 42;
EXPECT_CALL(element, Call(&b)).WillOnce(::testing::ReturnRef(b));
EXPECT_CALL(construct, Call(&alloc, &a, b));
EXPECT_CALL(destroy, Call(&alloc, &b));
hash_policy_traits<PolicyWithoutOptionalOps>::transfer(&alloc, &a, &b);
}
TEST_F(Test, with_transfer) {
int b = 42;
EXPECT_CALL(transfer, Call(&alloc, &a, &b));
hash_policy_traits<PolicyWithOptionalOps>::transfer(&alloc, &a, &b);
}
} // namespace
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl

@@ -0,0 +1,110 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// This library provides APIs to debug the probing behavior of hash tables.
//
// In general, the probing behavior is a black box for users and only the
// side effects can be measured in the form of performance differences.
// These APIs give a glimpse of the actual behavior of the probing algorithms in
// these hashtables given a specified hash function and a set of elements.
//
// The probe count distribution can be used to assess the quality of the hash
// function for that particular hash table. Note that a hash function that
// performs well in one hash table implementation does not necessarily perform
// well in a different one.
//
// This library supports std::unordered_{set,map}, dense_hash_{set,map} and
// absl::{flat,node,string}_hash_{set,map}.
#ifndef ABSL_CONTAINER_INTERNAL_HASHTABLE_DEBUG_H_
#define ABSL_CONTAINER_INTERNAL_HASHTABLE_DEBUG_H_
#include <cstddef>
#include <algorithm>
#include <type_traits>
#include <vector>
#include "absl/container/internal/hashtable_debug_hooks.h"
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
// Returns the number of probes required to look up `key`. Returns 0 for a
// search with no collisions. Higher values mean more hash collisions occurred;
// however, the exact meaning of this number varies according to the container
// type.
template <typename C>
size_t GetHashtableDebugNumProbes(
const C& c, const typename C::key_type& key) {
return absl::container_internal::hashtable_debug_internal::
HashtableDebugAccess<C>::GetNumProbes(c, key);
}
// Gets a histogram of the number of probes for each element in the container.
// The sum of all the values in the vector is equal to container.size().
template <typename C>
std::vector<size_t> GetHashtableDebugNumProbesHistogram(const C& container) {
std::vector<size_t> v;
for (auto it = container.begin(); it != container.end(); ++it) {
size_t num_probes = GetHashtableDebugNumProbes(
container,
absl::container_internal::hashtable_debug_internal::GetKey<C>(*it, 0));
v.resize(std::max(v.size(), num_probes + 1));
v[num_probes]++;
}
return v;
}
struct HashtableDebugProbeSummary {
size_t total_elements;
size_t total_num_probes;
double mean;
};
// Gets a summary of the probe count distribution for the elements in the
// container.
template <typename C>
HashtableDebugProbeSummary GetHashtableDebugProbeSummary(const C& container) {
auto probes = GetHashtableDebugNumProbesHistogram(container);
HashtableDebugProbeSummary summary = {};
for (size_t i = 0; i < probes.size(); ++i) {
summary.total_elements += probes[i];
summary.total_num_probes += probes[i] * i;
}
summary.mean = 1.0 * summary.total_num_probes / summary.total_elements;
return summary;
}
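// A usage sketch (std::unordered_* containers are supported via the default
// debug hooks):
//
// std::unordered_set<int> s = {1, 2, 3, 4};
// std::vector<size_t> hist = GetHashtableDebugNumProbesHistogram(s);
// HashtableDebugProbeSummary summary = GetHashtableDebugProbeSummary(s);
// // summary.mean == 1.0 * summary.total_num_probes / summary.total_elements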
// Returns the number of bytes requested from the allocator by the container
// and not freed.
template <typename C>
size_t AllocatedByteSize(const C& c) {
return absl::container_internal::hashtable_debug_internal::
HashtableDebugAccess<C>::AllocatedByteSize(c);
}
// Returns a tight lower bound for AllocatedByteSize(c) where `c` is of type `C`
// and `c.size()` is equal to `num_elements`.
template <typename C>
size_t LowerBoundAllocatedByteSize(size_t num_elements) {
return absl::container_internal::hashtable_debug_internal::
HashtableDebugAccess<C>::LowerBoundAllocatedByteSize(num_elements);
}
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_HASHTABLE_DEBUG_H_

@@ -0,0 +1,83 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Provides the internal API for hashtable_debug.h.
#ifndef ABSL_CONTAINER_INTERNAL_HASHTABLE_DEBUG_HOOKS_H_
#define ABSL_CONTAINER_INTERNAL_HASHTABLE_DEBUG_HOOKS_H_
#include <cstddef>
#include <algorithm>
#include <type_traits>
#include <vector>
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
namespace hashtable_debug_internal {
// If it is a map, call get<0>().
using std::get;
template <typename T, typename = typename T::mapped_type>
auto GetKey(const typename T::value_type& pair, int) -> decltype(get<0>(pair)) {
return get<0>(pair);
}
// If it is not a map, return the value directly.
template <typename T>
const typename T::key_type& GetKey(const typename T::key_type& key, char) {
return key;
}
// Containers should specialize this to provide debug information for that
// container.
template <class Container, typename Enabler = void>
struct HashtableDebugAccess {
// Returns the number of probes required to find `key` in `c`. The "number of
// probes" is a concept that can vary by container. Implementations should
// return 0 when `key` was found in the minimum number of operations and
// should increment the result for each non-trivial operation required to find
// `key`.
//
// The default implementation uses the bucket api from the standard and thus
// works for `std::unordered_*` containers.
static size_t GetNumProbes(const Container& c,
const typename Container::key_type& key) {
if (!c.bucket_count()) return {};
size_t num_probes = 0;
size_t bucket = c.bucket(key);
for (auto it = c.begin(bucket), e = c.end(bucket);; ++it, ++num_probes) {
if (it == e) return num_probes;
if (c.key_eq()(key, GetKey<Container>(*it, 0))) return num_probes;
}
}
// Returns the number of bytes requested from the allocator by the container
// and not freed.
//
// static size_t AllocatedByteSize(const Container& c);
// Returns a tight lower bound for AllocatedByteSize(c) where `c` is of type
// `Container` and `c.size()` is equal to `num_elements`.
//
// static size_t LowerBoundAllocatedByteSize(size_t num_elements);
};
} // namespace hashtable_debug_internal
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_HASHTABLE_DEBUG_HOOKS_H_

@@ -0,0 +1,740 @@
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// MOTIVATION AND TUTORIAL
//
// If you want to put in a single heap allocation N doubles followed by M ints,
// it's easy if N and M are known at compile time.
//
// struct S {
// double a[N];
// int b[M];
// };
//
// S* p = new S;
//
// But what if N and M are known only at run time? The class template Layout to
// the rescue! It's a portable generalization of the technique known as the
// struct hack.
//
// // This object will tell us everything we need to know about the memory
// // layout of double[N] followed by int[M]. It's structurally identical to
// // size_t[2] that stores N and M. It's very cheap to create.
// const Layout<double, int> layout(N, M);
//
// // Allocate enough memory for both arrays. `AllocSize()` tells us how much
// // memory is needed. We are free to use any allocation function we want as
// // long as it returns aligned memory.
// std::unique_ptr<unsigned char[]> p(new unsigned char[layout.AllocSize()]);
//
// // Obtain the pointer to the array of doubles.
// // Equivalent to `reinterpret_cast<double*>(p.get())`.
// //
// // We could have written layout.Pointer<0>(p) instead. If all the types are
// // unique you can use either form, but if some types are repeated you must
// // use the index form.
// double* a = layout.Pointer<double>(p.get());
//
// // Obtain the pointer to the array of ints.
// // Equivalent to `reinterpret_cast<int*>(p.get() + N * 8)`.
// int* b = layout.Pointer<int>(p.get());
//
// If we are unable to specify sizes of all fields, we can pass as many sizes as
// we can to `Partial()`. In return, it'll allow us to access the fields whose
// locations and sizes can be computed from the provided information.
// `Partial()` comes in handy when the array sizes are embedded into the
// allocation.
//
// // size_t[1] containing N, size_t[1] containing M, double[N], int[M].
// using L = Layout<size_t, size_t, double, int>;
//
// unsigned char* Allocate(size_t n, size_t m) {
// const L layout(1, 1, n, m);
// unsigned char* p = new unsigned char[layout.AllocSize()];
// *layout.Pointer<0>(p) = n;
// *layout.Pointer<1>(p) = m;
// return p;
// }
//
// void Use(unsigned char* p) {
// // First, extract N and M.
// // Specify that the first array has only one element. Using `prefix` we
// // can access the first two arrays but not more.
// constexpr auto prefix = L::Partial(1);
// size_t n = *prefix.Pointer<0>(p);
// size_t m = *prefix.Pointer<1>(p);
//
// // Now we can get pointers to the payload.
// const L layout(1, 1, n, m);
// double* a = layout.Pointer<double>(p);
// int* b = layout.Pointer<int>(p);
// }
//
// The layout we used above combines fixed-size with dynamically-sized fields.
// This is quite common. Layout is optimized for this use case and generates
// optimal code. All computations that can be performed at compile time are
// indeed performed at compile time.
//
// Efficiency tip: The order of fields matters. In `Layout<T1, ..., TN>` try to
// ensure that `alignof(T1) >= ... >= alignof(TN)`. This way you'll have no
// padding in between arrays.
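//
// For illustration, a sketch of the tip above, assuming a 4-byte int and an
// 8-byte double:
//
// Layout<double, int> good(N, M); // ints begin right after the doubles.
// Layout<int, double> bad(M, N);  // up to 4 bytes of padding after the ints.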
//
// You can manually override the alignment of an array by wrapping the type in
// `Aligned<T, N>`. `Layout<..., Aligned<T, N>, ...>` has exactly the same API
// and behavior as `Layout<..., T, ...>` except that the first element of the
// array of `T` is aligned to `N` (the rest of the elements follow without
// padding). `N` cannot be less than `alignof(T)`.
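//
// For example (a sketch, assuming a 4-byte int): start the char payload on a
// 64-byte boundary, e.g. a cache line.
//
// Layout<int, Aligned<char, 64>> layout(1, 100);
// assert(layout.Offset<1>() == 64); // 4 bytes of int, then padding.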
//
// `AllocSize()` and `Pointer()` are the most basic methods for dealing with
// memory layouts. Check out the reference or code below to discover more.
//
// EXAMPLE
//
// // Immutable move-only string with sizeof equal to sizeof(void*). The
// // string size and the characters are kept in the same heap allocation.
// class CompactString {
// public:
// CompactString(const char* s = "") {
// const size_t size = strlen(s);
// // size_t[1] followed by char[size + 1].
// const L layout(1, size + 1);
// p_.reset(new unsigned char[layout.AllocSize()]);
// // If running under ASAN, mark the padding bytes, if any, to catch
// // memory errors.
// layout.PoisonPadding(p_.get());
// // Store the size in the allocation.
// *layout.Pointer<size_t>(p_.get()) = size;
// // Store the characters in the allocation.
// memcpy(layout.Pointer<char>(p_.get()), s, size + 1);
// }
//
// size_t size() const {
// // Equivalent to reinterpret_cast<size_t&>(*p).
// return *L::Partial().Pointer<size_t>(p_.get());
// }
//
// const char* c_str() const {
// // Equivalent to reinterpret_cast<char*>(p.get() + sizeof(size_t)).
// // The argument in Partial(1) specifies that we have size_t[1] in front
// // of the characters.
// return L::Partial(1).Pointer<char>(p_.get());
// }
//
// private:
// // Our heap allocation contains a size_t followed by an array of chars.
// using L = Layout<size_t, char>;
// std::unique_ptr<unsigned char[]> p_;
// };
//
// int main() {
// CompactString s = "hello";
// assert(s.size() == 5);
// assert(strcmp(s.c_str(), "hello") == 0);
// }
//
// DOCUMENTATION
//
// The interface exported by this file consists of:
// - class `Layout<>` and its public members.
// - The public members of class `internal_layout::LayoutImpl<>`. That class
// isn't intended to be used directly, and its name and template parameter
// list are internal implementation details, but the class itself provides
// most of the functionality in this file. See comments on its members for
// detailed documentation.
//
// `Layout<T1, ..., Tn>::Partial(count1, ..., countm)` (where `m` <= `n`)
// returns a `LayoutImpl<>` object. `Layout<T1, ..., Tn> layout(count1, ...,
// countn)` creates a `Layout` object, which exposes the same functionality by
// inheriting from `LayoutImpl<>`.
#ifndef ABSL_CONTAINER_INTERNAL_LAYOUT_H_
#define ABSL_CONTAINER_INTERNAL_LAYOUT_H_
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <array>
#include <ostream>
#include <string>
#include <tuple>
#include <type_traits>
#include <typeinfo>
#include <utility>
#ifdef ADDRESS_SANITIZER
#include <sanitizer/asan_interface.h>
#endif
#include "absl/meta/type_traits.h"
#include "absl/strings/str_cat.h"
#include "absl/types/span.h"
#include "absl/utility/utility.h"
#if defined(__GXX_RTTI)
#define ABSL_INTERNAL_HAS_CXA_DEMANGLE
#endif
#ifdef ABSL_INTERNAL_HAS_CXA_DEMANGLE
#include <cxxabi.h>
#endif
namespace absl {
inline namespace lts_2018_12_18 {
namespace container_internal {
// A type wrapper that instructs `Layout` to use the specific alignment for the
// array. `Layout<..., Aligned<T, N>, ...>` has exactly the same API
// and behavior as `Layout<..., T, ...>` except that the first element of the
// array of `T` is aligned to `N` (the rest of the elements follow without
// padding).
//
// Requires: `N >= alignof(T)` and `N` is a power of 2.
template <class T, size_t N>
struct Aligned;
namespace internal_layout {
template <class T>
struct NotAligned {};
template <class T, size_t N>
struct NotAligned<const Aligned<T, N>> {
static_assert(sizeof(T) == 0, "Aligned<T, N> cannot be const-qualified");
};
template <size_t>
using IntToSize = size_t;
template <class>
using TypeToSize = size_t;
template <class T>
struct Type : NotAligned<T> {
using type = T;
};
template <class T, size_t N>
struct Type<Aligned<T, N>> {
using type = T;
};
template <class T>
struct SizeOf : NotAligned<T>, std::integral_constant<size_t, sizeof(T)> {};
template <class T, size_t N>
struct SizeOf<Aligned<T, N>> : std::integral_constant<size_t, sizeof(T)> {};
// Note: workaround for https://gcc.gnu.org/PR88115
template <class T>
struct AlignOf : NotAligned<T> {
static constexpr size_t value = alignof(T);
};
template <class T, size_t N>
struct AlignOf<Aligned<T, N>> {
static_assert(N % alignof(T) == 0,
"Custom alignment can't be lower than the type's alignment");
static constexpr size_t value = N;
};
// Does `Ts...` contain `T`?
template <class T, class... Ts>
using Contains = absl::disjunction<std::is_same<T, Ts>...>;
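// Copies the const qualifier from `From` to `To`: yields `const To` if `From`
// is const-qualified, and plain `To` otherwise.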
template <class From, class To>
using CopyConst =
typename std::conditional<std::is_const<From>::value, const To, To>::type;
// Note: We're not qualifying this with absl:: because it doesn't compile under
// MSVC.
template <class T>
using SliceType = Span<T>;
// This namespace contains no types. It prevents functions defined in it from
// being found by ADL.
namespace adl_barrier {
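// Returns the zero-based index of `Needle` among the arguments that follow
// it. E.g. Find(Type<int>(), Type<char>(), Type<int>()) == 1.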
template <class Needle, class... Ts>
constexpr size_t Find(Needle, Needle, Ts...) {
static_assert(!Contains<Needle, Ts...>(), "Duplicate element type");
return 0;
}
template <class Needle, class T, class... Ts>
constexpr size_t Find(Needle, T, Ts...) {
return adl_barrier::Find(Needle(), Ts()...) + 1;
}
constexpr bool IsPow2(size_t n) { return !(n & (n - 1)); }
// Returns `q * m` for the smallest `q` such that `q * m >= n`.
// Requires: `m` is a power of two. It's enforced by IsLegalElementType below.
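// E.g. Align(13, 8) == 16 and Align(16, 8) == 16.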
constexpr size_t Align(size_t n, size_t m) { return (n + m - 1) & ~(m - 1); }
constexpr size_t Min(size_t a, size_t b) { return b < a ? b : a; }
constexpr size_t Max(size_t a) { return a; }
template <class... Ts>
constexpr size_t Max(size_t a, size_t b, Ts... rest) {
return adl_barrier::Max(b < a ? a : b, rest...);
}
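// Returns the name of `T` wrapped in angle brackets, e.g. "<int>", demangling
// it when possible; returns an empty string when RTTI is unavailable.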
template <class T>
std::string TypeName() {
std::string out;
int status = 0;
char* demangled = nullptr;
#ifdef ABSL_INTERNAL_HAS_CXA_DEMANGLE
demangled = abi::__cxa_demangle(typeid(T).name(), nullptr, nullptr, &status);
#endif
if (status == 0 && demangled != nullptr) { // Demangling succeeded.
absl::StrAppend(&out, "<", demangled, ">");
free(demangled);
} else {
#if defined(__GXX_RTTI) || defined(_CPPRTTI)
absl::StrAppend(&out, "<", typeid(T).name(), ">");
#endif
}
return out;
}
} // namespace adl_barrier
template <bool C>
using EnableIf = typename std::enable_if<C, int>::type;
// Can `T` be a template argument of `Layout`?
template <class T>
using IsLegalElementType = std::integral_constant<
bool, !std::is_reference<T>::value && !std::is_volatile<T>::value &&
!std::is_reference<typename Type<T>::type>::value &&
!std::is_volatile<typename Type<T>::type>::value &&
adl_barrier::IsPow2(AlignOf<T>::value)>;
template <class Elements, class SizeSeq, class OffsetSeq>
class LayoutImpl;
// Public base class of `Layout` and the result type of `Layout::Partial()`.
//
// `Elements...` contains all template arguments of `Layout` that created this
// instance.
//
// `SizeSeq...` is `[0, NumSizes)` where `NumSizes` is the number of arguments
// passed to `Layout::Partial()` or `Layout::Layout()`.
//
// `OffsetSeq...` is `[0, NumOffsets)` where `NumOffsets` is
// `Min(sizeof...(Elements), NumSizes + 1)` (the number of arrays for which we
// can compute offsets).
template <class... Elements, size_t... SizeSeq, size_t... OffsetSeq>
class LayoutImpl<std::tuple<Elements...>, absl::index_sequence<SizeSeq...>,
absl::index_sequence<OffsetSeq...>> {
private:
static_assert(sizeof...(Elements) > 0, "At least one field is required");
static_assert(absl::conjunction<IsLegalElementType<Elements>...>::value,
"Invalid element type (see IsLegalElementType)");
enum {
NumTypes = sizeof...(Elements),
NumSizes = sizeof...(SizeSeq),
NumOffsets = sizeof...(OffsetSeq),
};
// These are guaranteed by `Layout`.
static_assert(NumOffsets == adl_barrier::Min(NumTypes, NumSizes + 1),
"Internal error");
static_assert(NumTypes > 0, "Internal error");
// Returns the index of `T` in `Elements...`. Results in a compilation error
// if `Elements...` doesn't contain exactly one instance of `T`.
template <class T>
static constexpr size_t ElementIndex() {
static_assert(Contains<Type<T>, Type<typename Type<Elements>::type>...>(),
"Type not found");
return adl_barrier::Find(Type<T>(),
Type<typename Type<Elements>::type>()...);
}
template <size_t N>
using ElementAlignment =
AlignOf<typename std::tuple_element<N, std::tuple<Elements...>>::type>;
public:
// Element types of all arrays packed in a tuple.
using ElementTypes = std::tuple<typename Type<Elements>::type...>;
// Element type of the Nth array.
template <size_t N>
using ElementType = typename std::tuple_element<N, ElementTypes>::type;
constexpr explicit LayoutImpl(IntToSize<SizeSeq>... sizes)
: size_{sizes...} {}
// Alignment of the layout, equal to the strictest alignment of all elements.
// All pointers passed to the methods of layout must be aligned to this value.
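//
// // A sketch, assuming alignof(int) == 4 and alignof(double) == 8:
// static_assert(Layout<int, double>::Alignment() == 8, "");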
static constexpr size_t Alignment() {
return adl_barrier::Max(AlignOf<Elements>::value...);
}
// Offset in bytes of the Nth array.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// assert(x.Offset<0>() == 0);   // The ints start at offset 0.
// assert(x.Offset<1>() == 16);  // The doubles start at offset 16.
//
// Requires: `N <= NumSizes && N < sizeof...(Ts)`.
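//
// A worked sketch of the computation for the example above:
// Offset<1>() == Align(Offset<0>() + sizeof(int) * 3, alignof(double))
// == Align(12, 8) == 16.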
template <size_t N, EnableIf<N == 0> = 0>
constexpr size_t Offset() const {
return 0;
}
template <size_t N, EnableIf<N != 0> = 0>
constexpr size_t Offset() const {
static_assert(N < NumOffsets, "Index out of bounds");
return adl_barrier::Align(
Offset<N - 1>() + SizeOf<ElementType<N - 1>>() * size_[N - 1],
ElementAlignment<N>::value);
}
// Offset in bytes of the array with the specified element type. There must
// be exactly one such array and its zero-based index must be at most
// `NumSizes`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// assert(x.Offset<int>() == 0);      // The ints start at offset 0.
// assert(x.Offset<double>() == 16);  // The doubles start at offset 16.
template <class T>
constexpr size_t Offset() const {
return Offset<ElementIndex<T>()>();
}
// Offsets in bytes of all arrays for which the offsets are known.
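//
// // A sketch, continuing the example above:
// Layout<int, double> x(3, 4);
// assert(x.Offsets()[0] == 0);
// assert(x.Offsets()[1] == 16);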
constexpr std::array<size_t, NumOffsets> Offsets() const {
return {{Offset<OffsetSeq>()...}};
}
// The number of elements in the Nth array. This is the Nth argument of
// `Layout::Partial()` or `Layout::Layout()` (zero-based).
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// assert(x.Size<0>() == 3);
// assert(x.Size<1>() == 4);
//
// Requires: `N < NumSizes`.
template <size_t N>
constexpr size_t Size() const {
static_assert(N < NumSizes, "Index out of bounds");
return size_[N];
}
// The number of elements in the array with the specified element type.
// There must be exactly one such array and its zero-based index must be
// at most `NumSizes`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// assert(x.Size<int>() == 3);
// assert(x.Size<double>() == 4);
template <class T>
constexpr size_t Size() const {
return Size<ElementIndex<T>()>();
}
// The number of elements of all arrays for which they are known.
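//
// // A sketch, continuing the example above:
// Layout<int, double> x(3, 4);
// assert(x.Sizes()[0] == 3);
// assert(x.Sizes()[1] == 4);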
constexpr std::array<size_t, NumSizes> Sizes() const {
return {{Size<SizeSeq>()...}};
}
// Pointer to the beginning of the Nth array.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
// int* ints = x.Pointer<0>(p);
// double* doubles = x.Pointer<1>(p);
//
// Requires: `N <= NumSizes && N < sizeof...(Ts)`.
// Requires: `p` is aligned to `Alignment()`.
template <size_t N, class Char>
CopyConst<Char, ElementType<N>>* Pointer(Char* p) const {
using C = typename std::remove_const<Char>::type;
static_assert(
std::is_same<C, char>() || std::is_same<C, unsigned char>() ||
std::is_same<C, signed char>(),
"The argument must be a pointer to [const] [signed|unsigned] char");
constexpr size_t alignment = Alignment();
(void)alignment;
assert(reinterpret_cast<uintptr_t>(p) % alignment == 0);
return reinterpret_cast<CopyConst<Char, ElementType<N>>*>(p + Offset<N>());
}
// Pointer to the beginning of the array with the specified element type.
// There must be exactly one such array and its zero-based index must be at
// most `NumSizes`.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
// int* ints = x.Pointer<int>(p);
// double* doubles = x.Pointer<double>(p);
//
// Requires: `p` is aligned to `Alignment()`.
template <class T, class Char>
CopyConst<Char, T>* Pointer(Char* p) const {
return Pointer<ElementIndex<T>()>(p);
}
// Pointers to all arrays for which pointers are known.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
//
// int* ints;
// double* doubles;
// std::tie(ints, doubles) = x.Pointers(p);
//
// Requires: `p` is aligned to `Alignment()`.
//
// Note: We're not using ElementType alias here because it does not compile
// under MSVC.
template <class Char>
std::tuple<CopyConst<
Char, typename std::tuple_element<OffsetSeq, ElementTypes>::type>*...>
Pointers(Char* p) const {
return std::tuple<CopyConst<Char, ElementType<OffsetSeq>>*...>(
Pointer<OffsetSeq>(p)...);
}
// The Nth array.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
// Span<int> ints = x.Slice<0>(p);
// Span<double> doubles = x.Slice<1>(p);
//
// Requires: `N < NumSizes`.
// Requires: `p` is aligned to `Alignment()`.
template <size_t N, class Char>
SliceType<CopyConst<Char, ElementType<N>>> Slice(Char* p) const {
return SliceType<CopyConst<Char, ElementType<N>>>(Pointer<N>(p), Size<N>());
}
// The array with the specified element type. There must be exactly one
// such array and its zero-based index must be less than `NumSizes`.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
// Span<int> ints = x.Slice<int>(p);
// Span<double> doubles = x.Slice<double>(p);
//
// Requires: `p` is aligned to `Alignment()`.
template <class T, class Char>
SliceType<CopyConst<Char, T>> Slice(Char* p) const {
return Slice<ElementIndex<T>()>(p);
}
// All arrays with known sizes.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
//
// Span<int> ints;
// Span<double> doubles;
// std::tie(ints, doubles) = x.Slices(p);
//
// Requires: `p` is aligned to `Alignment()`.
//
// Note: We're not using ElementType alias here because it does not compile
// under MSVC.
template <class Char>
std::tuple<SliceType<CopyConst<
Char, typename std::tuple_element<SizeSeq, ElementTypes>::type>>...>
Slices(Char* p) const {
// Workaround for https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63875 (fixed
// in 6.1).
(void)p;
return std::tuple<SliceType<CopyConst<Char, ElementType<SizeSeq>>>...>(
Slice<SizeSeq>(p)...);
}
// The size of the allocation that fits all arrays.
//
// // int[3], 4 bytes of padding, double[4].
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()]; // 48 bytes
//
// Requires: `NumSizes == sizeof...(Ts)`.
constexpr size_t AllocSize() const {
static_assert(NumTypes == NumSizes, "You must specify sizes of all fields");
return Offset<NumTypes - 1>() +
SizeOf<ElementType<NumTypes - 1>>() * size_[NumTypes - 1];
}
// If built with --config=asan, poisons padding bytes (if any) in the
// allocation. The pointer must point to a memory block at least
// `AllocSize()` bytes in length.
//
// `Char` must be `[const] [signed|unsigned] char`.
//
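// // A sketch: under ASan this poisons the 4 padding bytes between int[3]
// // and double[4]; otherwise it is a no-op.
// Layout<int, double> x(3, 4);
// unsigned char* p = new unsigned char[x.AllocSize()];
// x.PoisonPadding(p);
//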
// Requires: `p` is aligned to `Alignment()`.
template <class Char, size_t N = NumOffsets - 1, EnableIf<N == 0> = 0>
void PoisonPadding(const Char* p) const {
Pointer<0>(p); // verify the requirements on `Char` and `p`
}
template <class Char, size_t N = NumOffsets - 1, EnableIf<N != 0> = 0>
void PoisonPadding(const Char* p) const {
static_assert(N < NumOffsets, "Index out of bounds");
(void)p;
#ifdef ADDRESS_SANITIZER
PoisonPadding<Char, N - 1>(p);
// The `if` is an optimization. It doesn't affect the observable behavior.
if (ElementAlignment<N - 1>::value % ElementAlignment<N>::value) {
size_t start =
Offset<N - 1>() + SizeOf<ElementType<N - 1>>() * size_[N - 1];
ASAN_POISON_MEMORY_REGION(p + start, Offset<N>() - start);
}
#endif
}
// Human-readable description of the memory layout. Useful for debugging.
// Slow.
//
// // char[5], 3 bytes of padding, int[3], 4 bytes of padding, followed
// // by an unknown number of doubles.
// auto x = Layout<char, int, double>::Partial(5, 3);
// assert(x.DebugString() ==
// "@0<char>(1)[5]; @8<int>(4)[3]; @24<double>(8)");
//
// Each field is in the following format: @offset<type>(sizeof)[size] (<type>
// may be missing depending on the target platform). For example,
// @8<int>(4)[3] means that at offset 8 we have an array of ints, where each
// int is 4 bytes, and we have 3 of those ints. The size of the last field may
// be missing (as in the example above). Only fields with known offsets are
// described. Type names may differ across platforms: one compiler might
// produce "unsigned*" where another produces "unsigned int *".
std::string DebugString() const {
const auto offsets = Offsets();
const size_t sizes[] = {SizeOf<ElementType<OffsetSeq>>()...};
const std::string types[] = {adl_barrier::TypeName<ElementType<OffsetSeq>>()...};
std::string res = absl::StrCat("@0", types[0], "(", sizes[0], ")");
for (size_t i = 0; i != NumOffsets - 1; ++i) {
absl::StrAppend(&res, "[", size_[i], "]; @", offsets[i + 1], types[i + 1],
"(", sizes[i + 1], ")");
}
// NumSizes is a constant that may be zero. Some compilers cannot see that
// inside the `if` statement below `size_[NumSizes - 1]` must be valid.
int last = static_cast<int>(NumSizes) - 1;
if (NumTypes == NumSizes && last >= 0) {
absl::StrAppend(&res, "[", size_[last], "]");
}
return res;
}
private:
// Arguments of `Layout::Partial()` or `Layout::Layout()`.
size_t size_[NumSizes > 0 ? NumSizes : 1];
};
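// Helper that names the `LayoutImpl` specialization used both as the result
// of `Layout::Partial()` with `NumSizes` arguments and as the base class of
// `Layout` (with `NumSizes == sizeof...(Ts)`).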
template <size_t NumSizes, class... Ts>
using LayoutType = LayoutImpl<
std::tuple<Ts...>, absl::make_index_sequence<NumSizes>,
absl::make_index_sequence<adl_barrier::Min(sizeof...(Ts), NumSizes + 1)>>;
} // namespace internal_layout
// Descriptor of arrays of various types and sizes laid out in memory one after
// another. See the top of the file for documentation.
//
// Check out the public API of internal_layout::LayoutImpl above. The type is
// internal to the library but its methods are public, and they are inherited
// by `Layout`.
template <class... Ts>
class Layout : public internal_layout::LayoutType<sizeof...(Ts), Ts...> {
public:
static_assert(sizeof...(Ts) > 0, "At least one field is required");
static_assert(
absl::conjunction<internal_layout::IsLegalElementType<Ts>...>::value,
"Invalid element type (see IsLegalElementType)");
// The result type of `Partial()` with `NumSizes` arguments.
template <size_t NumSizes>
using PartialType = internal_layout::LayoutType<NumSizes, Ts...>;
// `Layout` knows the element types of the arrays we want to lay out in
// memory but not the number of elements in each array.
// `Partial(size1, ..., sizeN)` allows us to specify the latter. The
// resulting immutable object can be used to obtain pointers to the
// individual arrays.
//
// It's allowed to pass fewer array sizes than the number of arrays. E.g.,
// if all you need is the offset of the second array, you only need to
// pass one argument -- the number of elements in the first array.
//
// // int[3] followed by 4 bytes of padding and an unknown number of
// // doubles.
// auto x = Layout<int, double>::Partial(3);
// // doubles start at byte 16.
// assert(x.Offset<1>() == 16);
//
// If you know the number of elements in all arrays, you can still call
// `Partial()` but it's more convenient to use the constructor of `Layout`.
//
// Layout<int, double> x(3, 5);
//
// Note: The sizes of the arrays must be specified in number of elements,
// not in bytes.
//
// Requires: `sizeof...(Sizes) <= sizeof...(Ts)`.
// Requires: all arguments are convertible to `size_t`.
template <class... Sizes>
static constexpr PartialType<sizeof...(Sizes)> Partial(Sizes&&... sizes) {
static_assert(sizeof...(Sizes) <= sizeof...(Ts), "");
return PartialType<sizeof...(Sizes)>(absl::forward<Sizes>(sizes)...);
}
// Creates a layout with the sizes of all arrays specified. If you know
// only the sizes of the first N arrays (where N can be zero), you can use
// `Partial()` defined above. The constructor is essentially equivalent to
// calling `Partial()` and passing in all array sizes; the constructor is
// provided as a convenient abbreviation.
//
// Note: The sizes of the arrays must be specified in number of elements,
// not in bytes.
constexpr explicit Layout(internal_layout::TypeToSize<Ts>... sizes)
: internal_layout::LayoutType<sizeof...(Ts), Ts...>(sizes...) {}
};
} // namespace container_internal
} // inline namespace lts_2018_12_18
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_LAYOUT_H_
