
Add a16w8 per-op test for mean_dim (#19594)

Open

christine-long-meta wants to merge 1 commit into pytorch:main from christine-long-meta:export-D104532361

Conversation

christine-long-meta (Contributor) commented May 14, 2026

Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for aten.mean.dim on Ethos-U55 and Ethos-U85.

Changes

  • Add `a16w8_mean_test_parameters` dict with 11 test configurations covering keepdim/no-keepdim, positive/negative dims, dim=None, and ranks 1-4
  • Add `test_mean_dim_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True` (a minimal sketch follows this list)
  • Add `test_mean_dim_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
  • Register `ops/test_mean_dim.py` in `fbcode/` and `xplat/` `targets.bzl`
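
A minimal sketch of the test structure described above, for reviewers unfamiliar with the Arm test pipelines. Import paths, the `Mean` wrapper, the example shapes, and the `aten_ops` target are illustrative assumptions, not code from the diff; only the pipeline kwargs are the ones named in the list:

```python
# Illustrative sketch only -- paths, shapes, and the aten_ops target are
# assumptions; the pipeline kwargs come from this PR's summary.
import torch

from executorch.backends.arm.test import common  # assumed import path
from executorch.backends.arm.test.tester.test_pipeline import (  # assumed path
    EthosU55PipelineINT,
)


class Mean(torch.nn.Module):
    """Thin wrapper so the pipeline traces aten.mean.dim."""

    def __init__(self, dim, keepdim):
        super().__init__()
        self.dim = dim
        self.keepdim = keepdim

    def forward(self, x):
        return x.mean(dim=self.dim, keepdim=self.keepdim)


# Placeholder entries; the real dict has 11 configurations covering
# keepdim/no-keepdim, positive/negative dims, dim=None, and ranks 1-4.
a16w8_mean_test_parameters = {
    "rank_1": lambda: (torch.rand(7), 0, False),
    "rank_1_keepdim": lambda: (torch.rand(7), 0, True),
    "rank_4_neg_dim": lambda: (torch.rand(1, 5, 7, 3), -1, False),
}


@common.parametrize("test_data", a16w8_mean_test_parameters)
def test_mean_dim_a16w8_u55_INT(test_data):
    data, dim, keepdim = test_data()
    pipeline = EthosU55PipelineINT(
        Mean(dim, keepdim),
        (data,),
        "torch.ops.aten.mean.dim",  # aten_ops check target (assumed)
        a16w8_quantization=True,
        symmetric_io_quantization=True,
    )
    pipeline.run()
```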

Differential Revision: D104532361

pytorch-bot (Bot) commented May 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19594

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions github-actions Bot added the ciflow/trunk and module: arm labels May 14, 2026
pytorch-bot (Bot) commented May 14, 2026

Workflows were awaiting approval. CI has now been triggered for the ciflow labels on this PR.

@meta-cla meta-cla Bot added the CLA Signed label May 14, 2026
meta-codesync (Bot, Contributor) commented May 14, 2026

@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532361.

github-actions (Bot) commented

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@meta-codesync meta-codesync Bot changed the title Add a16w8 per-op test for mean_dim Add a16w8 per-op test for mean_dim (#19594) May 14, 2026
@christine-long-meta christine-long-meta force-pushed the export-D104532361 branch 2 times, most recently from 744487a to b79321f, May 14, 2026 16:47
christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.mean.dim` on Ethos-U55 and Ethos-U85.

## Changes
- Add `a16w8_mean_test_parameters` dict with 11 test configurations covering keepdim/no-keepdim, positive/negative dims, dim=None, and ranks 1-4
- Add `test_mean_dim_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_mean_dim_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_mean_dim.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532361
@christine-long-meta christine-long-meta force-pushed the export-D104532361 branch 3 times, most recently from e7d90e2 to 37c98c1, May 14, 2026 16:48
christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
@christine-long-meta christine-long-meta force-pushed the export-D104532361 branch 2 times, most recently from b79321f to 3350584, May 14, 2026 16:49

christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
@christine-long-meta christine-long-meta force-pushed the export-D104532361 branch 4 times, most recently from 7bb5c56 to d8dea29, May 14, 2026 16:53
@meta-codesync meta-codesync Bot changed the title Add a16w8 per-op test for mean_dim (#19594) Add a16w8 per-op test for mean_dim May 14, 2026
christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
@meta-codesync meta-codesync Bot changed the title Add a16w8 per-op test for mean_dim Add a16w8 per-op test for mean_dim (#19594) May 14, 2026
christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
zingo (Collaborator) left a review comment:

Thanks!

zingo (Collaborator) commented May 15, 2026

I spotted some test failures on u85 that need to be handled before merge so they don't break CI:

FAILED backends/arm/test/ops/test_mean_dim.py::test_mean_dim_a16w8_u85_INT[rank_1_keepdim] - AssertionError: Output 0 does not match reference output.
	Given atol: 0.002953125, rtol: 0.001.
	Output tensor shape: torch.Size([1]), dtype: torch.float32
	Difference: max: -0.416595458984375, abs: 0.416595458984375, mean abs error: 0.416595458984375.
	-- Model vs. Reference --
	 Numel: 1, 1
	Median: 0.0, 0.416595458984375
	  Mean: 0.0, 0.416595458984375
	   Max: 0.0, 0.416595458984375
	   Min: 0.0, 0.416595458984375
FAILED backends/arm/test/ops/test_mean_dim.py::test_mean_dim_a16w8_u85_INT[rand_1_keepdim] - AssertionError: Output 0 does not match reference output.
	Given atol: 0.003650776645168662, rtol: 0.001.
	Output tensor shape: torch.Size([1, 1, 7, 3]), dtype: torch.float32
	Difference: max: -0.2187926173210144, abs: 0.6785573959350586, mean abs error: 0.48883259509290966.
	-- Model vs. Reference --
	 Numel: 21, 21
	Median: 0.0, 0.5271524786949158
	  Mean: 0.0, 0.48883259509290966
	   Max: 0.0, 0.6785573959350586
	   Min: 0.0, 0.2187926173210144
FAILED backends/arm/test/ops/test_mean_dim.py::test_mean_dim_a16w8_u85_INT[rank_1] - AssertionError: Output 0 does not match reference output.
	Given atol: 0.002953125, rtol: 0.001.
	Output tensor shape: torch.Size([]), dtype: torch.float32
	Difference: max: -0.416595458984375, abs: 0.416595458984375, mean abs error: 0.416595458984375.
	-- Model vs. Reference --
	 Numel: 1, 1
	Median: 0.0, 0.416595458984375
	  Mean: 0.0, 0.416595458984375
	   Max: 0.0, 0.416595458984375
	   Min: 0.0, 0.416595458984375
FAILED backends/arm/test/ops/test_mean_dim.py::test_mean_dim_a16w8_u85_INT[rand_3] - AssertionError: Output 0 does not match reference output.
	Given atol: 0.004313722088932991, rtol: 0.001.
	Output tensor shape: torch.Size([1, 5, 7]), dtype: torch.float32
	Difference: max: -0.15074846148490906, abs: 0.8482610583305359, mean abs error: 0.48883466039385115.
	-- Model vs. Reference --
	 Numel: 35, 35
	Median: 0.0, 0.50409996509552
	  Mean: 0.0, 0.48883466039385115
	   Max: 0.0, 0.8482610583305359
	   Min: 0.0, 0.15074846148490906
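
For context on what these assertions check: a simplified sketch of the atol/rtol comparison behind the "does not match reference output" failures. The actual comparison lives in the Arm test tester; this only illustrates the tolerance semantics:

```python
# Simplified sketch of the tolerance check behind the failures above; not
# the actual tester code.
import torch


def outputs_match(model_out, ref_out, atol, rtol=1e-3):
    # torch.allclose passes when |model - ref| <= atol + rtol * |ref| elementwise.
    return torch.allclose(model_out, ref_out, atol=atol, rtol=rtol)


# First failure above: the model produced 0.0 where the reference is ~0.4166,
# far outside atol=0.002953125, so the assertion fires.
assert not outputs_match(
    torch.tensor([0.0]), torch.tensor([0.416595458984375]), atol=0.002953125
)
```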

christine-long-meta (Contributor, Author) replied:

> I spotted some test failures on u85 that need to be handled before merge so they don't break CI: (full failure log quoted above)

@zingo Good catch. I added `test_mean_dim_a16w8_u85_INT_xfail` to mark these failing cases as expected failures (sketched below).
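
A sketch of one way such an xfail variant could be expressed, assuming `common.parametrize` accepts an `xfails` mapping of case name to reason as other Arm op tests do; `Mean` and `a16w8_mean_test_parameters` are as in the sketch in the PR description, and the case names come from the failure log above:

```python
# Sketch, assuming common.parametrize supports an xfails mapping; Mean and
# a16w8_mean_test_parameters are as in the earlier sketch.
from executorch.backends.arm.test.tester.test_pipeline import (  # assumed path
    EthosU85PipelineINT,
)

# Case names taken from the u85 failure log above.
u85_a16w8_xfails = {
    "rank_1": "a16w8 mean output mismatch on u85",
    "rank_1_keepdim": "a16w8 mean output mismatch on u85",
    "rand_1_keepdim": "a16w8 mean output mismatch on u85",
    "rand_3": "a16w8 mean output mismatch on u85",
}


@common.parametrize("test_data", a16w8_mean_test_parameters, xfails=u85_a16w8_xfails)
def test_mean_dim_a16w8_u85_INT_xfail(test_data):
    data, dim, keepdim = test_data()
    EthosU85PipelineINT(
        Mean(dim, keepdim),
        (data,),
        "torch.ops.aten.mean.dim",  # aten_ops check target (assumed)
        a16w8_quantization=True,
        symmetric_io_quantization=True,
    ).run()
```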

@meta-codesync meta-codesync Bot changed the title Add a16w8 per-op test for mean_dim (#19594) Add a16w8 per-op test for mean_dim May 15, 2026
christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 15, 2026
Summary:
Pull Request resolved: pytorch#19594

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.mean.dim` on Ethos-U55 and Ethos-U85.

## Changes
- Add `a16w8_mean_test_parameters` dict with 11 test configurations covering keepdim/no-keepdim, positive/negative dims, dim=None, and ranks 1-4
- Add `test_mean_dim_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True`
- Add `test_mean_dim_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_mean_dim.py` in `fbcode/` and `xplat/` `targets.bzl`

Differential Revision: D104532361
@meta-codesync meta-codesync Bot changed the title Add a16w8 per-op test for mean_dim Add a16w8 per-op test for mean_dim (#19594) May 15, 2026
christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 15, 2026

Labels: ciflow/trunk · CLA Signed · fb-exported · meta-exported · module: arm