Replace CompactValue with StyleValueHandle and StyleValuePool (#1534)
Summary:
X-link: https://github.com/facebook/react-native/pull/42131
Pull Request resolved: https://github.com/facebook/yoga/pull/1534
Now that the storage method is a hidden implementation detail, this changes the underlying data structure used to store styles from `CompactValue` (a customized 32-bit float with tag bits) to `StyleValuePool`.
This new structure operates on 16-bit handles backed by a shared small buffer. The vast majority of real-world values can be stored directly in the handle, but arbitrary 32-bit (and soon 64-bit) values are also allowed, in which case the handle becomes an index into the style buffer.
This is a real-world memory usage win, while also letting us store the 64-bit values we want for math function support (instead of doubling the storage requirements).
It does make style reads slower, and because reads are so frequent, the impact is observable in synthetic benchmarks. In an example laying out a tree of 10,000 nodes, we originally read from `StyleValuePool` 2.4 million times.
This originally resulted in a ~10% regression, but combined with the changes in the last diff, most style reads become simple bitwise operations on the handle (see the usage sketch below), and we are actually 14% faster than before.
| | Before | After | Δ |
| --- | --- | --- | --- |
| `sizeof(yoga::Style)` | 208B | 144B | -64B/-31% |
| `sizeof(yoga::Node)` | 640B | 576B | -64B/-10% |
| `sizeof(YogaLayoutableShadowNode)` | 920B | 856B | -64B/-7% |
| `sizeof(YogaLayoutableShadowNode) + sizeof(YogaStylableProps)` | 1296B | 1168B | -128B/-10% |
| `sizeof(ViewShadowNode)` | 920B | 856B | -64B/-7% |
| `sizeof(ViewShadowNode) + sizeof(ViewShadowNodeProps)` | 2000B | 1872B | -128B/-6% |
| "Huge nested layout" microbenchmark (M1 Ultra) | 11.5ms | 9.9ms | -1.6ms/-14% |
| Quest Store C++ heap usage (avg over 10 runs) | 86.2MB | 84.9MB | -1.3MB/-1.5% |
Reviewed By: joevilches
Differential Revision: D52223122
fbshipit-source-id: 990f4b7e991e8e22d198ce20f7da66d9c6ba637b
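Below is a minimal usage sketch of the handle/pool interaction described above, written against the `StyleValuePool` API in this header. It is illustrative only; in particular, the assumption that a default-constructed `StyleValueHandle` starts out as `Undefined` is mine, not something stated in this diff.

```cpp
#include <yoga/style/StyleLength.h>
#include <yoga/style/StyleValueHandle.h>
#include <yoga/style/StyleValuePool.h>

using namespace facebook::yoga;

void sketch() {
  StyleValuePool pool;
  StyleValueHandle width; // assumed to default to Undefined

  // Small integral point values pack directly into the 16-bit handle,
  // so no buffer slot is consumed.
  pool.store(width, StyleLength::points(100.0f));

  // A non-integral value spills into the pool's shared buffer; the handle's
  // value field then becomes an index into that buffer.
  pool.store(width, StyleLength::points(2.5f));

  StyleLength resolved = pool.getLength(width); // points(2.5f)
}
```

Note that once a handle is indexed, `storeValue()` reuses its buffer slot via `buffer_.replace()` instead of pushing a new one, so repeatedly mutating the same style property does not grow the buffer.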
/*
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

#pragma once

#include <bit>
#include <cassert>
#include <cstdint>

#include <yoga/numeric/FloatOptional.h>
#include <yoga/style/SmallValueBuffer.h>
#include <yoga/style/StyleLength.h>
#include <yoga/style/StyleSizeLength.h>
#include <yoga/style/StyleValueHandle.h>

namespace facebook::yoga {

/**
 * StyleValuePool allows compact storage for a sparse collection of assigned
 * lengths and numbers. Values are referred to using StyleValueHandle. In most
 * cases StyleValueHandle can embed the value directly, but if not, the value is
 * stored within a buffer provided by the pool. The pool contains a fixed number
 * of inline slots before falling back to heap allocating additional slots.
 */
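// A minimal usage sketch (illustrative only; assumes a default-constructed
// StyleValueHandle starts out as Undefined):
//
//   StyleValuePool pool;
//   StyleValueHandle handle;
//   pool.store(handle, StyleSizeLength::ofMaxContent());
//   bool isMax = pool.getSize(handle).isMaxContent(); // true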
class StyleValuePool {
 public:
  void store(StyleValueHandle& handle, StyleLength length) {
    if (length.isUndefined()) {
      handle.setType(StyleValueHandle::Type::Undefined);
    } else if (length.isAuto()) {
      handle.setType(StyleValueHandle::Type::Auto);
    } else {
      auto type = length.isPoints() ? StyleValueHandle::Type::Point
                                    : StyleValueHandle::Type::Percent;
      storeValue(handle, length.value().unwrap(), type);
    }
  }

  void store(StyleValueHandle& handle, StyleSizeLength sizeValue) {
    if (sizeValue.isUndefined()) {
      handle.setType(StyleValueHandle::Type::Undefined);
    } else if (sizeValue.isAuto()) {
      handle.setType(StyleValueHandle::Type::Auto);
    } else if (sizeValue.isMaxContent()) {
      storeKeyword(handle, StyleValueHandle::Keyword::MaxContent);
    } else if (sizeValue.isStretch()) {
      storeKeyword(handle, StyleValueHandle::Keyword::Stretch);
    } else if (sizeValue.isFitContent()) {
      storeKeyword(handle, StyleValueHandle::Keyword::FitContent);
    } else {
      auto type = sizeValue.isPoints() ? StyleValueHandle::Type::Point
                                       : StyleValueHandle::Type::Percent;
      storeValue(handle, sizeValue.value().unwrap(), type);
    }
  }

  void store(StyleValueHandle& handle, FloatOptional number) {
    if (number.isUndefined()) {
      handle.setType(StyleValueHandle::Type::Undefined);
    } else {
      storeValue(handle, number.unwrap(), StyleValueHandle::Type::Number);
    }
  }

  StyleLength getLength(StyleValueHandle handle) const {
    if (handle.isUndefined()) {
      return StyleLength::undefined();
    } else if (handle.isAuto()) {
      return StyleLength::ofAuto();
    } else {
      assert(
          handle.type() == StyleValueHandle::Type::Point ||
          handle.type() == StyleValueHandle::Type::Percent);
      float value = (handle.isValueIndexed())
          ? std::bit_cast<float>(buffer_.get32(handle.value()))
          : unpackInlineInteger(handle.value());

      return handle.type() == StyleValueHandle::Type::Point
          ? StyleLength::points(value)
          : StyleLength::percent(value);
    }
  }

  StyleSizeLength getSize(StyleValueHandle handle) const {
    if (handle.isUndefined()) {
      return StyleSizeLength::undefined();
    } else if (handle.isAuto()) {
      return StyleSizeLength::ofAuto();
    } else if (handle.isKeyword(StyleValueHandle::Keyword::MaxContent)) {
      return StyleSizeLength::ofMaxContent();
    } else if (handle.isKeyword(StyleValueHandle::Keyword::FitContent)) {
      return StyleSizeLength::ofFitContent();
    } else if (handle.isKeyword(StyleValueHandle::Keyword::Stretch)) {
      return StyleSizeLength::ofStretch();
    } else {
      assert(
          handle.type() == StyleValueHandle::Type::Point ||
          handle.type() == StyleValueHandle::Type::Percent);
      float value = (handle.isValueIndexed())
          ? std::bit_cast<float>(buffer_.get32(handle.value()))
          : unpackInlineInteger(handle.value());

      return handle.type() == StyleValueHandle::Type::Point
          ? StyleSizeLength::points(value)
          : StyleSizeLength::percent(value);
    }
  }

  FloatOptional getNumber(StyleValueHandle handle) const {
    if (handle.isUndefined()) {
      return FloatOptional{};
    } else {
      assert(handle.type() == StyleValueHandle::Type::Number);
      float value = (handle.isValueIndexed())
          ? std::bit_cast<float>(buffer_.get32(handle.value()))
          : unpackInlineInteger(handle.value());
      return FloatOptional{value};
    }
  }

 private:
  // Stores a numeric value: reuse the handle's existing buffer slot if it is
  // already indexed, pack small integers inline, otherwise spill to the buffer.
  void storeValue(
      StyleValueHandle& handle,
      float value,
      StyleValueHandle::Type type) {
    handle.setType(type);

    if (handle.isValueIndexed()) {
      auto newIndex =
          buffer_.replace(handle.value(), std::bit_cast<uint32_t>(value));
      handle.setValue(newIndex);
    } else if (isIntegerPackable(value)) {
      handle.setValue(packInlineInteger(value));
    } else {
      auto newIndex = buffer_.push(std::bit_cast<uint32_t>(value));
      handle.setValue(newIndex);
      handle.setValueIsIndexed();
    }
  }

  void storeKeyword(
      StyleValueHandle& handle,
      StyleValueHandle::Keyword keyword) {
    handle.setType(StyleValueHandle::Type::Keyword);

    if (handle.isValueIndexed()) {
      auto newIndex =
          buffer_.replace(handle.value(), static_cast<uint32_t>(keyword));
      handle.setValue(newIndex);
    } else {
      handle.setValue(static_cast<uint16_t>(keyword));
    }
  }

  // True if the value is an integer whose magnitude fits in the handle's
  // 11 value bits (plus a sign bit), so it can be stored inline.
  static constexpr bool isIntegerPackable(float f) {
    constexpr uint16_t kMaxInlineAbsValue = (1 << 11) - 1;

    auto i = static_cast<int32_t>(f);
    return static_cast<float>(i) == f && i >= -kMaxInlineAbsValue &&
        i <= +kMaxInlineAbsValue;
  }

  // Packs an integral float as sign-and-magnitude: sign in bit 11,
  // magnitude in the low 11 bits.
  static constexpr uint16_t packInlineInteger(float value) {
    uint16_t isNegative = value < 0 ? 1 : 0;
    return static_cast<uint16_t>(
        (isNegative << 11) |
        (static_cast<int32_t>(value) * (isNegative != 0u ? -1 : 1)));
  }

  static constexpr float unpackInlineInteger(uint16_t value) {
    constexpr uint16_t kValueSignMask = 0b0000'1000'0000'0000;
    constexpr uint16_t kValueMagnitudeMask = 0b0000'0111'1111'1111;
    const bool isNegative = (value & kValueSignMask) != 0;
    return static_cast<float>(
        (value & kValueMagnitudeMask) * (isNegative ? -1 : 1));
  }

  // Shared value storage: a fixed number of inline slots before heap allocation.
  SmallValueBuffer<4> buffer_;
};

} // namespace facebook::yoga