Fix margin: auto with smaller parent leading to negative position #1014

Open
woehrl01 wants to merge 1 commit from woehrl01/fix-margin-auto-negative into main
woehrl01 commented 2020-06-09 04:05:08 -07:00 (Migrated from github.com)

This PR fixes #978, where a smaller parent combined with `margin: auto` on larger children led to a negative position.

Includes tests to prevent future regressions.

woehrl01 commented 2020-07-15 07:12:55 -07:00 (Migrated from github.com)

@SidharthGuglani What concerns do you have that are blocking this from being merged?

NickGerleman commented 2023-01-13 21:30:23 -08:00 (Migrated from github.com)

@jacobp100 I thought about this one more.

If the surfaces we want to use as a test for "big enough to be representative" already had `margin: "auto"` broken without anyone noticing, we should probably just make this fix.

jacobp100 commented 2023-01-14 04:47:37 -08:00 (Migrated from github.com)

Yeah, I’d highly doubt anyone was relying on the incorrect behaviour. Would be really good to get this in - I’ve personally not used auto margins in RN since I hit this bug maybe 3 years ago?

NickGerleman commented 2023-06-11 14:31:23 -07:00 (Migrated from github.com)

> Yeah, I’d highly doubt anyone was relying on the incorrect behaviour. Would be really good to get this in - I’ve personally not used auto margins in RN since I hit this bug maybe 3 years ago?

I finally got around to finishing the production A/B test on the Facebook app for https://github.com/facebook/react-native/pull/35635.

It turns out that in that example, tens of millions of users hit the prop-parsing code that no-ops, along with code relying on it doing nothing. Those users saw a 1% drop in an important e-commerce metric for us, signaling that they were hitting issues in that UI.

Since OSS is not on Fabric yet, and we fixed this everywhere else, I can just fix the app. But I think I’m going to go back to, as a rule, putting any behavior change behind an errata until we can prove it is not relied on in real-world usage. Given enough code, it’s surprising what ends up showing up.

NickGerleman commented 2023-06-11 14:35:22 -07:00 (Migrated from github.com)

FYI @nicoburns, I know you had asked before what I meant by “run an experiment”, and I think you had interpreted it as running a lot of tests. This is an example of what I meant: we turn something on for a percentage of users in very large apps, and can measure how often a condition is hit and whether user behavior changes.

The RN team likes to use this tool to understand the safety of shipping changes to real-world users.

nicoburns commented 2023-06-11 16:29:36 -07:00 (Migrated from github.com)

> FYI @nicoburns, I know you had asked before what I meant when I said “run an experiment”, where I think you had interpreted it as running a lot of tests. This is an example of what I meant, where we turn something on for a percentage of users in very large apps with many users, and can measure how often a condition is hit and whether user behavior changes.

Hmm... interesting. I think that does give me a better idea of what "run an experiment" means. I think I had imagined these apps having relatively comprehensive e2e tests, and "run an experiment" would consist of running those rather than enabling the code for production users. I wonder if it would be possible to use a similar mechanism to surface future compatibility warnings to the owners of the affected apps, such that the "errata" could be removed after a certain grace period has elapsed?

(Although I would still personally take a "freeze and fork" approach to developing a new version of the algorithm, which would avoid this problem entirely: existing conservative users can just stick with the frozen version, and after an initial "experimental" period while it catches up with web implementations, the forked version can piggyback on browsers' web-compatibility testing.)

NickGerleman commented 2023-06-11 19:17:56 -07:00 (Migrated from github.com)

> mechanism to surface future compatibility warnings to the owners of the affected apps, such that the "errata" could be removed after a certain grace period has elapsed?

That’s close to the plan. I have many more details in https://github.com/facebook/yoga/issues/1247

We’re not yet at the point where we can provide good warnings and get people to change their code. But we can guess whether an errata might be safe to remove based on feedback about how often it is relied on.

This pull request has changes conflicting with the target branch.
  • gentest/fixtures/YGMarginTest.html
  • tests/YGMarginTest.cpp
  • yoga/Yoga.cpp

Checkout

From your project repository, check out a new branch and test the changes.
git fetch -u origin woehrl01/fix-margin-auto-negative:woehrl01/fix-margin-auto-negative
git checkout woehrl01/fix-margin-auto-negative