
Conversation

ntBre (Contributor) commented Oct 20, 2025

This is a rough draft with a naive fix. We still disagree with Black on some formatting, for example, this snippet from our docstring formatting tests:

```py
def by_first_letter_of_column_values(self, col: str) -> list[pl.DataFrame]:
    return [
        self._df.filter(pl.col(col).str.starts_with(c))
        for c in sorted(set(df.select(pl.col(col).str.slice(0, 1)).to_series()))
    ]
```

Black reuses the parentheses from the `sorted` call instead of adding new parentheses around the whole expression, which seems preferable.

Black:

```py
def by_first_letter_of_column_values(self, col: str) -> list[pl.DataFrame]:
    return [
        self._df.filter(pl.col(col).str.starts_with(c))
        for c in sorted(
            set(df.select(pl.col(col).str.slice(0, 1)).to_series())
        )
    ]
```

This PR:

```py
def by_first_letter_of_column_values(self, col: str) -> list[pl.DataFrame]:
    return [
        self._df.filter(pl.col(col).str.starts_with(c))
        for c in (
            sorted(set(df.select(pl.col(col).str.slice(0, 1)).to_series()))
        )
    ]
```

I can't quite tell if I'm having trouble here because this is tricky to implement in Ruff, as Dylan mentioned here, or if I'm still just unfamiliar with the formatter.

Summary

This PR implements the `wrap_comprehension_in` style added in
psf/black#4699. This wraps `in` clauses in
comprehensions if they get too long. Using some examples from the upstream
issue, this code:

```py
[a for graph_path_expression in refined_constraint.condition_as_predicate.variables]

[
    a
    for graph_path_expression
    in refined_constraint.condition_as_predicate.variables
]
```

is currently formatted to:

```py
[
    a
    for graph_path_expression in refined_constraint.condition_as_predicate.variables
]

[
    a
    for graph_path_expression in refined_constraint.condition_as_predicate.variables
]
```

even if the second line of the comprehension exceeds the configured line length.

In preview, Black will now break these lines by parenthesizing the expression
following `in`:

```py
[
    a
    for graph_path_expression in (
        refined_constraint.condition_as_predicate.variables
    )
]

[
    a
    for graph_path_expression in (
        refined_constraint.condition_as_predicate.variables
    )
]
```

I actually kind of like the alternative formatting mentioned on the original
Black issue and in our #12870, which would be more like:

```py
[
    a
    for graph_path_expression
    in refined_constraint.condition_as_predicate.variables
]
```

but I think I'm in the minority there.

Test Plan

Existing Black compatibility tests showing fewer differences

ntBre force-pushed the brent/wrap-comprehension-in branch from 8e1b159 to d109f48 on October 20, 2025 21:31
github-actions bot commented Oct 20, 2025

ruff-ecosystem results

Formatter (stable)

✅ ecosystem check detected no format changes.

Formatter (preview)

ℹ️ ecosystem check detected format changes. (+449 -544 lines in 95 files in 24 projects; 31 projects unchanged)

RasaHQ/rasa (+21 -25 lines across 5 files)

ruff format --preview

rasa/core/policies/ted_policy.py~L1009

         entity_tag_specs = [
             EntityTagSpec(
                 tag_name=tag_spec["tag_name"],
-                ids_to_tags={
-                    int(key): value for key, value in tag_spec["ids_to_tags"].items()
-                },
-                tags_to_ids={
-                    key: int(value) for key, value in tag_spec["tags_to_ids"].items()
-                },
+                ids_to_tags={int(key): value for key, value in tag_spec[
+                        "ids_to_tags"
+                    ].items()},
+                tags_to_ids={key: int(value) for key, value in tag_spec[
+                        "tags_to_ids"
+                    ].items()},
                 num_tags=tag_spec["num_tags"],
             )
             for tag_spec in entity_tag_specs

rasa/nlu/classifiers/diet_classifier.py~L1195

         entity_tag_specs = [
             EntityTagSpec(
                 tag_name=tag_spec["tag_name"],
-                ids_to_tags={
-                    int(key): value for key, value in tag_spec["ids_to_tags"].items()
-                },
-                tags_to_ids={
-                    key: int(value) for key, value in tag_spec["tags_to_ids"].items()
-                },
+                ids_to_tags={int(key): value for key, value in tag_spec[
+                        "ids_to_tags"
+                    ].items()},
+                tags_to_ids={key: int(value) for key, value in tag_spec[
+                        "tags_to_ids"
+                    ].items()},
                 num_tags=tag_spec["num_tags"],
             )
             for tag_spec in entity_tag_specs

rasa/shared/core/domain.py~L1672

 
         def get_duplicates(my_items: Iterable[Any]) -> List[Any]:
             """Returns a list of duplicate items in my_items."""
-            return [
-                item
-                for item, count in collections.Counter(my_items).items()
-                if count > 1
-            ]
+            return [item for item, count in collections.Counter(
+                    my_items
+                ).items() if count > 1]
 
         def check_mappings(
             intent_properties: Dict[Text, Dict[Text, Union[bool, List]]],

rasa/shared/core/generator.py~L141

 
     @staticmethod
     def _unfreeze_states(frozen_states: Deque[FrozenState]) -> List[State]:
-        return [
-            {key: dict(value) for key, value in dict(frozen_state).items()}
-            for frozen_state in frozen_states
-        ]
+        return [{key: dict(value) for key, value in dict(
+                    frozen_state
+                ).items()} for frozen_state in frozen_states]
 
     def past_states(
         self,

tests/core/test_broker.py~L163

         actual.publish(e.as_dict())
 
     with actual.session_scope() as session:
-        events_types = [
-            json.loads(event.data)["event"]
-            for event in session.query(actual.SQLBrokerEvent).all()
-        ]
+        events_types = [json.loads(event.data)["event"] for event in session.query(
+                actual.SQLBrokerEvent
+            ).all()]
 
     assert events_types == ["user", "slot", "restart"]
 

Snowflake-Labs/snowcli (+6 -9 lines across 2 files)

ruff format --preview

src/snowflake/cli/_app/snow_connector.py~L123

 
     connection_parameters = {}
     if connection_name:
-        connection_parameters = {
-            _resolve_alias(k): v
-            for k, v in get_connection_dict(connection_name).items()
-        }
+        connection_parameters = {_resolve_alias(k): v for k, v in get_connection_dict(
+                connection_name
+            ).items()}
 
     elif temporary_connection:
         connection_parameters = {}  # we will apply overrides in next step

src/snowflake/cli/api/config.py~L111

         return cls(**known_settings, _other_settings=other_settings)
 
     def to_dict_of_known_non_empty_values(self) -> dict:
-        return {
-            k: v
-            for k, v in asdict(self).items()
-            if k != "_other_settings" and v is not None
-        }
+        return {k: v for k, v in asdict(
+                self
+            ).items() if k != "_other_settings" and v is not None}
 
     def _non_empty_other_values(self) -> dict:
         return {k: v for k, v in self._other_settings.items() if v is not None}

alteryx/featuretools (+9 -11 lines across 2 files)

ruff format --preview

featuretools/feature_base/feature_base.py~L612

         return self._name_from_base(self.base_features[0].get_name())
 
     def generate_names(self):
-        return [
-            self._name_from_base(base_name)
-            for base_name in self.base_features[0].get_feature_names()
-        ]
+        return [self._name_from_base(base_name) for base_name in self.base_features[
+                0
+            ].get_feature_names()]
 
     def get_arguments(self):
         _is_forward, relationship = self.relationship_path[0]

featuretools/synthesis/deep_feature_synthesis.py~L607

                 return True
             return False
 
-        for feat in [
-            f for f in all_features[dataframe.ww.name].values() if is_valid_feature(f)
-        ]:
+        for feat in [f for f in all_features[
+                dataframe.ww.name
+            ].values() if is_valid_feature(f)]:
             # Get interesting_values from the EntitySet that was passed, which
             # is assumed to be the most recent version of the EntitySet.
             # Features can contain a stale EntitySet reference without

featuretools/synthesis/deep_feature_synthesis.py~L921

             return [feature]
 
         # Build the complete list of features prior to processing
-        selected_features = [
-            expand_features(feature)
-            for feature in all_features[dataframe.ww.name].values()
-        ]
+        selected_features = [expand_features(feature) for feature in all_features[
+                dataframe.ww.name
+            ].values()]
         selected_features = functools.reduce(operator.iconcat, selected_features, [])
 
         column_schemas = column_schemas if column_schemas else set()

apache/airflow (+37 -42 lines across 7 files)

ruff format --preview

airflow-core/src/airflow/serialization/serialized_objects.py~L2426

         param_to_attr = {
             "description": "_description",
         }
-        return {
-            param_to_attr.get(k, k): v.default
-            for k, v in signature(DAG.__init__).parameters.items()
-            if v.default is not v.empty
-        }
+        return {param_to_attr.get(k, k): v.default for k, v in signature(
+                DAG.__init__
+            ).parameters.items() if v.default is not v.empty}
 
     _CONSTRUCTOR_PARAMS = __get_constructor_defaults.__func__()  # type: ignore
     del __get_constructor_defaults

airflow-core/tests/unit/always/test_secrets_local_filesystem.py~L432

     def test_yaml_extension_parsers_return_same_result(self, file_content):
         with mock_local_file(file_content):
             conn_uri_by_conn_id_yaml = {
-                conn_id: conn.get_uri()
-                for conn_id, conn in local_filesystem.load_connections_dict("a.yaml").items()
+                conn_id: conn.get_uri() for conn_id, conn in local_filesystem.load_connections_dict(
+                    "a.yaml"
+                ).items()
             }
             conn_uri_by_conn_id_yml = {
-                conn_id: conn.get_uri()
-                for conn_id, conn in local_filesystem.load_connections_dict("a.yml").items()
+                conn_id: conn.get_uri() for conn_id, conn in local_filesystem.load_connections_dict(
+                    "a.yml"
+                ).items()
             }
             assert conn_uri_by_conn_id_yaml == conn_uri_by_conn_id_yml
 

airflow-core/tests/unit/jobs/test_scheduler_job.py~L2206

         assert [x.queued_dttm for x in tis] == [None, None]
 
         _queue_tasks(tis=tis)
-        log_events = [
-            x.event for x in session.scalars(select(Log).where(Log.run_id == run_id).order_by(Log.id)).all()
-        ]
+        log_events = [x.event for x in session.scalars(
+                select(Log).where(Log.run_id == run_id).order_by(Log.id)
+            ).all()]
         assert log_events == [
             "stuck in queued reschedule",
             "stuck in queued reschedule",

airflow-core/tests/unit/jobs/test_scheduler_job.py~L2217

         with _loader_mock(mock_executors):
             scheduler._handle_tasks_stuck_in_queued()
 
-        log_events = [
-            x.event for x in session.scalars(select(Log).where(Log.run_id == run_id).order_by(Log.id)).all()
-        ]
+        log_events = [x.event for x in session.scalars(
+                select(Log).where(Log.run_id == run_id).order_by(Log.id)
+            ).all()]
         assert log_events == [
             "stuck in queued reschedule",
             "stuck in queued reschedule",

airflow-core/tests/unit/jobs/test_scheduler_job.py~L2233

 
         with _loader_mock(mock_executors):
             scheduler._handle_tasks_stuck_in_queued()
-        log_events = [
-            x.event for x in session.scalars(select(Log).where(Log.run_id == run_id).order_by(Log.id)).all()
-        ]
+        log_events = [x.event for x in session.scalars(
+                select(Log).where(Log.run_id == run_id).order_by(Log.id)
+            ).all()]
         assert log_events == [
             "stuck in queued reschedule",
             "stuck in queued reschedule",

airflow-core/tests/unit/jobs/test_scheduler_job.py~L2297

         assert [x.queued_dttm for x in tis] == [None, None]
 
         _queue_tasks(tis=tis)
-        log_events = [
-            x.event for x in session.scalars(select(Log).where(Log.run_id == run_id).order_by(Log.id)).all()
-        ]
+        log_events = [x.event for x in session.scalars(
+                select(Log).where(Log.run_id == run_id).order_by(Log.id)
+            ).all()]
         assert log_events == [
             "stuck in queued reschedule",
             "stuck in queued reschedule",

airflow-core/tests/unit/jobs/test_scheduler_job.py~L2308

         with _loader_mock(mock_executors):
             scheduler._handle_tasks_stuck_in_queued()
 
-        log_events = [
-            x.event for x in session.scalars(select(Log).where(Log.run_id == run_id).order_by(Log.id)).all()
-        ]
+        log_events = [x.event for x in session.scalars(
+                select(Log).where(Log.run_id == run_id).order_by(Log.id)
+            ).all()]
         assert log_events == [
             "stuck in queued reschedule",
             "stuck in queued reschedule",

airflow-core/tests/unit/jobs/test_scheduler_job.py~L2329

                 scheduler._handle_tasks_stuck_in_queued()
             tis = dr.get_task_instances(session=session)
 
-        log_events = [
-            x.event for x in session.scalars(select(Log).where(Log.run_id == run_id).order_by(Log.id)).all()
-        ]
+        log_events = [x.event for x in session.scalars(
+                select(Log).where(Log.run_id == run_id).order_by(Log.id)
+            ).all()]
         assert log_events == [
             "stuck in queued reschedule",
             "stuck in queued reschedule",

airflow-core/tests/unit/models/test_dag.py~L3239

             t1 = my_teardown()
             s1 >> w1 >> t1
             s1 >> t1
-        assert {
-            x.task_id
-            for x in dag.partial_subset(
+        assert {x.task_id for x in dag.partial_subset(
                 "my_setup", include_upstream=upstream, include_downstream=downstream
-            ).tasks
-        } == expected
+            ).tasks} == expected
 
     def test_get_flat_relative_ids_two_tasks_diff_setup_teardowns_deeper(self):
         with DAG(dag_id="test_dag", schedule=None, start_date=pendulum.now()) as dag:

devel-common/src/docs/utils/conf_constants.py~L68

 
 
 def get_rst_epilogue(package_version: str, airflow_core: bool) -> str:
-    return "\n".join(
-        f".. |{key}| replace:: {replace}"
-        for key, replace in get_global_substitutions(package_version, airflow_core).items()
-    )
+    return "\n".join(f".. |{key}| replace:: {replace}" for key, replace in get_global_substitutions(
+            package_version, airflow_core
+        ).items())
 
 
 SMARTQUOTES_EXCLUDES = {"builders": ["man", "text", "spelling"]}

providers/fab/src/airflow/providers/fab/auth_manager/security_manager/override.py~L2415

 
     def _get_all_roles_with_permissions(self) -> dict[str, Role]:
         """Return a dict with a key of role name and value of role with early loaded permissions."""
-        return {
-            r.name: r
-            for r in self.session.scalars(
+        return {r.name: r for r in self.session.scalars(
                 select(self.role_model).options(joinedload(self.role_model.permissions))
-            ).unique()
-        }
+            ).unique()}
 
     def _get_all_non_dag_permissions(self) -> dict[tuple[str, str], Permission]:
         """

providers/google/src/airflow/providers/google/cloud/openlineage/mixins.py~L473

         """Extract column names from a dataset's schema."""
         return [
             f.name
-            for f in dataset.facets.get("schema", SchemaDatasetFacet(fields=[])).fields  # type: ignore[union-attr]
+            for f in (
+                dataset.facets.get("schema", SchemaDatasetFacet(fields=[])).fields  # type: ignore[union-attr]
+            )
             if dataset.facets
         ]
 

apache/superset (+71 -62 lines across 12 files)

ruff format --preview

superset/commands/dashboard/importers/v1/utils.py~L82

         # in filter_scopes the key is the chart ID as a string; we need to update
         # them to be the new ID as a string:
         metadata["filter_scopes"] = {
-            str(id_map[int(old_id)]): columns
-            for old_id, columns in metadata["filter_scopes"].items()
-            if int(old_id) in id_map
+            str(id_map[int(old_id)]): columns for old_id, columns in metadata[
+                "filter_scopes"
+            ].items() if int(old_id) in id_map
         }
 
         # now update columns to use new IDs:

superset/commands/dashboard/importers/v1/utils.py~L98

 
     if "expanded_slices" in metadata:
         metadata["expanded_slices"] = {
-            str(id_map[int(old_id)]): value
-            for old_id, value in metadata["expanded_slices"].items()
+            str(id_map[int(old_id)]): value for old_id, value in metadata[
+                "expanded_slices"
+            ].items()
         }
 
     if "default_filters" in metadata:

superset/common/query_context_processor.py~L148

             try:
                 if invalid_columns := [
                     col
-                    for col in get_column_names_from_columns(query_obj.columns)
-                    + get_column_names_from_metrics(query_obj.metrics or [])
+                    for col in get_column_names_from_columns(
+                        query_obj.columns
+                    ) + get_column_names_from_metrics(query_obj.metrics or [])
                     if (
                         col not in self._qc_datasource.column_names
                         and col != DTTM_ALIAS

superset/daos/tag.py~L318

         ids = [tag.id for tag in tags]
         return [
             star.tag_id
-            for star in db.session.query(user_favorite_tag_table.c.tag_id)
-            .filter(
-                user_favorite_tag_table.c.tag_id.in_(ids),
-                user_favorite_tag_table.c.user_id == get_user_id(),
+            for star in (
+                db.session.query(user_favorite_tag_table.c.tag_id)
+                .filter(
+                    user_favorite_tag_table.c.tag_id.in_(ids),
+                    user_favorite_tag_table.c.user_id == get_user_id(),
+                )
+                .all()
             )
-            .all()
         ]
 
     @staticmethod

superset/migrations/shared/native_filters.py~L264

                                 child["cascadeParentIds"].append(parent["id"])
 
     return sorted(
-        [
-            fltr
-            for key in filter_by_key_and_field
-            for fltr in filter_by_key_and_field[key].values()
-        ],
+        [fltr for key in filter_by_key_and_field for fltr in filter_by_key_and_field[
+                key
+            ].values()],
         key=lambda fltr: fltr["filterType"],
     )
 

superset/models/helpers.py~L214

         """Get all (single column and multi column) unique constraints"""
         unique = [
             {c.name for c in u.columns}
-            for u in cls.__table_args__  # type: ignore
+            for u in (
+                cls.__table_args__  # type: ignore
+            )
             if isinstance(u, UniqueConstraint)
         ]
         unique.extend({c.name} for c in cls.__table__.columns if c.unique)  # type: ignore

superset/models/helpers.py~L251

 
         schema: dict[str, Any] = {
             column.name: formatter(column)
-            for column in cls.__table__.columns  # type: ignore
+            for column in (
+                cls.__table__.columns  # type: ignore
+            )
             if (column.name in cls.export_fields and column.name not in parent_excludes)
         }
         if recursive:

superset/models/helpers.py~L403

                 parent_excludes = {c.name for c in parent_ref.local_columns}
         dict_rep = {
             c.name: getattr(self, c.name)
-            for c in cls.__table__.columns  # type: ignore
+            for c in (
+                cls.__table__.columns  # type: ignore
+            )
             if (
                 c.name in export_fields
                 and c.name not in parent_excludes

superset/models/helpers.py~L694

 
     table = target.__table__
     primary_keys = table.primary_key.columns.keys()
-    data = {
-        attr: getattr(target, attr)
-        for attr in list(table.columns.keys()) + (keep_relations or [])
-        if attr not in primary_keys and attr not in ignore
-    }
+    data = {attr: getattr(target, attr) for attr in list(table.columns.keys()) + (
+            keep_relations or []
+        ) if attr not in primary_keys and attr not in ignore}
     data.update(kwargs)
 
     return target.__class__(**data)

superset/security/manager.py~L633

             and (
                 drillable_columns := {
                     row[0]
-                    for row in self.session.query(TableColumn.column_name)
-                    .filter(TableColumn.table_id == datasource.id)
-                    .filter(TableColumn.groupby)
-                    .all()
+                    for row in (
+                        self.session.query(TableColumn.column_name)
+                        .filter(TableColumn.table_id == datasource.id)
+                        .filter(TableColumn.groupby)
+                        .all()
+                    )
                 }
             )
             and set(dimensions).issubset(drillable_columns)

superset/views/core.py~L736

                 [
                     {
                         "slice_id" if key == "chart_id" else key: value
-                        for key, value in ChartWarmUpCacheCommand(
-                            slc, dashboard_id, extra_filters
+                        for key, value in (
+                            ChartWarmUpCacheCommand(slc, dashboard_id, extra_filters)
+                            .run()
+                            .items()
                         )
-                        .run()
-                        .items()
                     }
                     for slc in slices
                 ],

superset/views/datasource/views.py~L101

             datasource_dict["owners"], default_to_user=False
         )
 
-        duplicates = [
-            name
-            for name, count in Counter([
+        duplicates = [name for name, count in Counter([
                 col["column_name"] for col in datasource_dict["columns"]
-            ]).items()
-            if count > 1
-        ]
+            ]).items() if count > 1]
         if duplicates:
             return json_error_response(
                 _(

superset/viz.py~L562

             try:
                 invalid_columns = [
                     col
-                    for col in get_column_names_from_columns(
-                        query_obj.get("columns") or []
-                    )
-                    + get_column_names_from_columns(query_obj.get("groupby") or [])
-                    + utils.get_column_names_from_metrics(
-                        cast(list[Metric], query_obj.get("metrics") or [])
+                    for col in (
+                        get_column_names_from_columns(query_obj.get("columns") or [])
+                        + get_column_names_from_columns(query_obj.get("groupby") or [])
+                        + utils.get_column_names_from_metrics(
+                            cast(list[Metric], query_obj.get("metrics") or [])
+                        )
                     )
                     if col not in self.datasource.column_names
                 ]

superset/viz.py~L2762

             dims = ()
         if level == -1:
             return [
-                {"name": m, "children": self.nest_procs(procs, 0, (m,))}
-                for m in procs[0].columns
+                {"name": m, "children": self.nest_procs(procs, 0, (m,))} for m in procs[
+                    0
+                ].columns
             ]
         if not level:
             return [

tests/integration_tests/charts/api_tests.py~L1545

         admin = self.get_user("admin")
         users_favorite_ids = [
             star.obj_id
-            for star in db.session.query(FavStar.obj_id)
-            .filter(
-                and_(
-                    FavStar.user_id == admin.id,
-                    FavStar.class_name == FavStarClassName.CHART,
+            for star in (
+                db.session.query(FavStar.obj_id)
+                .filter(
+                    and_(
+                        FavStar.user_id == admin.id,
+                        FavStar.class_name == FavStarClassName.CHART,
+                    )
                 )
+                .all()
             )
-            .all()
         ]
 
         assert users_favorite_ids

tests/integration_tests/dashboards/api_tests.py~L847

         admin = self.get_user("admin")
         users_favorite_ids = [
             star.obj_id
-            for star in db.session.query(FavStar.obj_id)
-            .filter(
-                and_(
-                    FavStar.user_id == admin.id,
-                    FavStar.class_name == FavStarClassName.DASHBOARD,
+            for star in (
+                db.session.query(FavStar.obj_id)
+                .filter(
+                    and_(
+                        FavStar.user_id == admin.id,
+                        FavStar.class_name == FavStarClassName.DASHBOARD,
+                    )
                 )
+                .all()
             )
-            .all()
         ]
 
         assert users_favorite_ids

tests/integration_tests/datasets/api_tests.py~L440

             },
         }
         if response["result"]["database"]["backend"] not in ("presto", "hive"):
-            assert {
-                k: v for k, v in response["result"].items() if k in expected_result
-            } == expected_result
+            assert {k: v for k, v in response[
+                    "result"
+                ].items() if k in expected_result} == expected_result
         assert len(response["result"]["columns"]) == 3
         assert len(response["result"]["metrics"]) == 2
 

binary-husky/gpt_academic (+6 -14 lines across 2 files)

ruff format --preview

crazy_functions/Document_Optimize.py~L770

 
         # Filter supported file formats
         file_paths = [
-            f
-            for f in file_paths
-            if any(
-                f.lower().endswith(ext)
-                for ext in list(processor.paper_extractor.SUPPORTED_EXTENSIONS)
-                + [".json", ".csv", ".xlsx", ".xls"]
-            )
+            f for f in file_paths if any(f.lower().endswith(ext) for ext in list(
+                    processor.paper_extractor.SUPPORTED_EXTENSIONS
+                ) + [".json", ".csv", ".xlsx", ".xls"])
         ]
 
     if not file_paths:

crazy_functions/paper_fns/reduce_aigc.py~L977

 
         # Filter supported file formats
         file_paths = [
-            f
-            for f in file_paths
-            if any(
-                f.lower().endswith(ext)
-                for ext in list(processor.paper_extractor.SUPPORTED_EXTENSIONS)
-                + [".json", ".csv", ".xlsx", ".xls"]
-            )
+            f for f in file_paths if any(f.lower().endswith(ext) for ext in list(
+                    processor.paper_extractor.SUPPORTED_EXTENSIONS
+                ) + [".json", ".csv", ".xlsx", ".xls"])
         ]
 
     if not file_paths:

freedomofpress/securedrop (+12 -13 lines across 2 files)

ruff format --preview

securedrop/models.py~L874

 
         # For seen indicators, we need to make sure one doesn't already exist
         # otherwise it'll hit a unique key conflict
-        already_seen_files = {
-            file.file_id for file in SeenFile.query.filter_by(journalist_id=deleted.id).all()
-        }
+        already_seen_files = {file.file_id for file in SeenFile.query.filter_by(
+                journalist_id=deleted.id
+            ).all()}
         for file in SeenFile.query.filter_by(journalist_id=self.id).all():
             if file.file_id in already_seen_files:
                 db.session.delete(file)

securedrop/models.py~L884

                 file.journalist_id = deleted.id
                 db.session.add(file)
 
-        already_seen_messages = {
-            message.message_id
-            for message in SeenMessage.query.filter_by(journalist_id=deleted.id).all()
-        }
+        already_seen_messages = {message.message_id for message in SeenMessage.query.filter_by(
+                journalist_id=deleted.id
+            ).all()}
         for message in SeenMessage.query.filter_by(journalist_id=self.id).all():
             if message.message_id in already_seen_messages:
                 db.session.delete(message)

securedrop/models.py~L895

                 message.journalist_id = deleted.id
                 db.session.add(message)
 
-        already_seen_replies = {
-            reply.reply_id for reply in SeenReply.query.filter_by(journalist_id=deleted.id).all()
-        }
+        already_seen_replies = {reply.reply_id for reply in SeenReply.query.filter_by(
+                journalist_id=deleted.id
+            ).all()}
         for reply in SeenReply.query.filter_by(journalist_id=self.id).all():
             if reply.reply_id in already_seen_replies:
                 db.session.delete(reply)

securedrop/tests/test_journalist_api.py~L432

             submission["filename"] for submission in response.json["submissions"]
         ]
 
-        expected_submissions = [
-            submission.filename for submission in test_submissions["source"].submissions
-        ]
+        expected_submissions = [submission.filename for submission in test_submissions[
+                "source"
+            ].submissions]
         assert observed_submissions == expected_submissions
 
 

ibis-project/ibis (+5 -9 lines across 2 files)

ruff format --preview

ibis/backends/pyspark/init.py~L389

         table_loc = self._to_sqlglot_table(database)
         catalog, db = self._to_catalog_db_tuple(table_loc)
         with self._active_catalog(catalog):
-            tables = [
-                row.tableName
-                for row in self._session.sql(
+            tables = [row.tableName for row in self._session.sql(
                     f"SHOW TABLES IN {db or self.current_database}"
-                ).collect()
-            ]
+                ).collect()]
         return self._filter_with_like(tables, like)
 
     def _wrap_udf_to_return_pandas(self, func, output_dtype):

ibis/backends/sql/datatypes.py~L1387

     dialect = "athena"
 
 
-TYPE_MAPPERS: dict[str, SqlglotType] = {
-    mapper.dialect: mapper
-    for mapper in set(get_subclasses(SqlglotType)) - {SqlglotType, BigQueryUDFType}
-}
+TYPE_MAPPERS: dict[str, SqlglotType] = {mapper.dialect: mapper for mapper in set(
+        get_subclasses(SqlglotType)
+    ) - {SqlglotType, BigQueryUDFType}}

langchain-ai/langchain (+21 -32 lines across 5 files)

ruff format --preview

libs/core/langchain_core/messages/block_translators/openai.py~L726

                 if "action" in block and isinstance(block["action"], dict):
                     if "sources" in block["action"]:
                         sources = block["action"]["sources"]
-                    web_search_call["args"] = {
-                        k: v for k, v in block["action"].items() if k != "sources"
-                    }
+                    web_search_call["args"] = {k: v for k, v in block[
+                            "action"
+                        ].items() if k != "sources"}
                 for key in block:
                     if key not in ("type", "id", "action", "status", "index"):
                         web_search_call[key] = block[key]

libs/core/langchain_core/messages/utils.py~L1261

                             f"{missing}. Full content block:\n\n{block}"
                         )
                         raise ValueError(err)
-                    if not any(
-                        tool_call["id"] == block["id"]
-                        for tool_call in cast("AIMessage", message).tool_calls
-                    ):
+                    if not any(tool_call["id"] == block["id"] for tool_call in cast(
+                            "AIMessage", message
+                        ).tool_calls):
                         oai_msg["tool_calls"] = oai_msg.get("tool_calls", [])
                         oai_msg["tool_calls"].append({
                             "type": "function",

libs/core/langchain_core/runnables/base.py~L599

         # Import locally to prevent circular import
         from langchain_core.prompts.base import BasePromptTemplate  # noqa: PLC0415
 
-        return [
-            node.data
-            for node in self.get_graph(config=config).nodes.values()
-            if isinstance(node.data, BasePromptTemplate)
-        ]
+        return [node.data for node in self.get_graph(
+                config=config
+            ).nodes.values() if isinstance(node.data, BasePromptTemplate)]
 
     def __or__(
         self,

libs/langchain/langchain_classic/chat_models/base.py~L604

 
     def _model_params(self, config: RunnableConfig | None) -> dict:
         config = ensure_config(config)
-        model_params = {
-            k.removeprefix(self._config_prefix): v
-            for k, v in config.get("configurable", {}).items()
-            if k.startswith(self._config_prefix)
-        }
+        model_params = {k.removeprefix(self._config_prefix): v for k, v in config.get(
+                "configurable", {}
+            ).items() if k.startswith(self._config_prefix)}
         if self._configurable_fields != "any":
             model_params = {
                 k: v for k, v in model_params.items() if k in self._configurable_fields

libs/langchain/langchain_classic/chat_models/base.py~L624

         config = RunnableConfig(**(config or {}), **cast("RunnableConfig", kwargs))
         model_params = self._model_params(config)
         remaining_config = {k: v for k, v in config.items() if k != "configurable"}
-        remaining_config["configurable"] = {
-            k: v
-            for k, v in config.get("configurable", {}).items()
-            if k.removeprefix(self._config_prefix) not in model_params
-        }
+        remaining_config["configurable"] = {k: v for k, v in config.get(
+                "configurable", {}
+            ).items() if k.removeprefix(self._config_prefix) not in model_params}
         queued_declarative_operations = list(self._queued_declarative_operations)
         if remaining_config:
             queued_declarative_operations.append(

libs/langchain_v1/langchain/chat_models/base.py~L559

 
     def _model_params(self, config: RunnableConfig | None) -> dict:
         config = ensure_config(config)
-        model_params = {
-            _remove_prefix(k, self._config_prefix): v
-            for k, v in config.get("configurable", {}).items()
-            if k.startswith(self._config_prefix)
-        }
+        model_params = {_remove_prefix(k, self._config_prefix): v for k, v in config.get(
+                "configurable", {}
+            ).items() if k.startswith(self._config_prefix)}
         if self._configurable_fields != "any":
             model_params = {k: v for k, v in model_params.items() if k in self._configurable_fields}
         return model_params

libs/langchain_v1/langchain/chat_models/base.py~L577

         config = RunnableConfig(**(config or {}), **cast("RunnableConfig", kwargs))
         model_params = self._model_params(config)
         remaining_config = {k: v for k, v in config.items() if k != "configurable"}
-        remaining_config["configurable"] = {
-            k: v
-            for k, v in config.get("configurable", {}).items()
-            if _remove_prefix(k, self._config_prefix) not in model_params
-        }
+        remaining_config["configurable"] = {k: v for k, v in config.get(
+                "configurable", {}
+            ).items() if _remove_prefix(k, self._config_prefix) not in model_params}
         queued_declarative_operations = list(self._queued_declarative_operations)
         if remaining_config:
             queued_declarative_operations.append(

latchbio/latch (+14 -20 lines across 2 files)

ruff format --preview

src/latch/utils.py~L128

             name=x["displayName"],
             default=x["accountId"] == default_account,
         )
-        for x in owned_teams
-        + member_teams
-        + (
-            [res["teamInfoByAccountId"]]
-            if res["teamInfoByAccountId"] is not None
-            else []
+        for x in (
+            owned_teams
+            + member_teams
+            + (
+                [res["teamInfoByAccountId"]]
+                if res["teamInfoByAccountId"] is not None
+                else []
+            )
+            + owned_org_teams
+            + member_org_teams
         )
-        + owned_org_teams
-        + member_org_teams
     }
 
     return teams

src/latch_cli/snakemake/serialize.py~L255

     )
     admin_lp = get_serializable_launch_plan(lp, settings, registrable_entity_cache)
 
-    registrable_entities = [
-        x.to_flyte_idl()
-        for x in list(
+    registrable_entities = [x.to_flyte_idl() for x in list(
             filter(should_register_with_admin, list(registrable_entity_cache.values()))
-        )
-        + [admin_lp]
-    ]
+        ) + [admin_lp]]
     for idx, entity in enumerate(registrable_entities):
         cur = spec_dir
 

src/latch_cli/snakemake/serialize.py~L308

     )
     admin_lp = get_serializable_launch_plan(lp, settings, registrable_entity_cache)
 
-    registrable_entities = [
-        x.to_flyte_idl()
-        for x in list(
+    registrable_entities = [x.to_flyte_idl() for x in list(
             filter(should_register_with_admin, list(registrable_entity_cache.values()))
-        )
-        + [admin_lp]
-    ]
+        ) + [admin_lp]]
 
     click.secho("\nSerializing workflow entities", bold=True)
     persist_registrable_entities(registrable_entities, output_dir)

lnbits/lnbits (+3 -5 lines across 1 file)

ruff format --preview

lnbits/core/views/extension_api.py~L456

     user: User = Depends(check_user_exists),
 ) -> list[Extension]:
     user_extensions_ids = [ue.extension for ue in await get_user_extensions(user.id)]
-    return [
-        ext
-        for ext in await get_valid_extensions(False)
-        if ext.code in user_extensions_ids
-    ]
+    return [ext for ext in await get_valid_extensions(
+            False
+        ) if ext.code in user_extensions_ids]
 
 
 @extension_router.delete(

mlflow/mlflow (+43 -43 lines across 6 files)

ruff format --preview

mlflow/dspy/util.py~L79

 
         lm = dspy.settings.lm
 
-        lm_attributes = {
-            key: value
-            for key, value in getattr(lm, "kwargs", {}).items()
-            if key not in {"api_key", "api_base"}
-        }
+        lm_attributes = {key: value for key, value in getattr(
+                lm, "kwargs", {}
+            ).items() if key not in {"api_key", "api_base"}}
 
         for attr in ["model", "model_type", "cache", "temperature", "max_tokens"]:
             value = getattr(lm, attr, None)

mlflow/store/tracking/sqlalchemy_store.py~L1801

     ) -> list[LoggedModelOutput]:
         return [
             LoggedModelOutput(model_id=output.destination_id, step=output.step)
-            for output in session.query(SqlInput)
-            .filter(
-                SqlInput.source_type == "RUN_OUTPUT",
-                SqlInput.source_id == run_id,
-                SqlInput.destination_type == "MODEL_OUTPUT",
+            for output in (
+                session.query(SqlInput)
+                .filter(
+                    SqlInput.source_type == "RUN_OUTPUT",
+                    SqlInput.source_id == run_id,
+                    SqlInput.destination_type == "MODEL_OUTPUT",
+                )
+                .all()
             )
-            .all()
         ]
 
     #######################################################################################

mlflow/store/tracking/sqlalchemy_store.py~L2050

             # First, get all scorer_ids for this experiment
             scorer_ids = [
                 scorer.scorer_id
-                for scorer in session.query(SqlScorer.scorer_id)
-                .filter(SqlScorer.experiment_id == experiment.experiment_id)
-                .all()
+                for scorer in (
+                    session.query(SqlScorer.scorer_id)
+                    .filter(SqlScorer.experiment_id == experiment.experiment_id)
+                    .all()
+                )
             ]
 
             if not scorer_ids:

mlflow/types/schema.py~L399

             not isinstance(prop, dict) for prop in kwargs["properties"].values()
         ):
             raise MlflowException("Expected properties to be a dictionary of Property JSON")
-        return cls([
-            Property.from_json_dict(**{name: prop}) for name, prop in kwargs["properties"].items()
-        ])
+        return cls([Property.from_json_dict(**{name: prop}) for name, prop in kwargs[
+                "properties"
+            ].items()])
 
     def _merge(self, other: BaseType) -> Object:
         """

tests/metrics/genai/test_genai_metrics.py~L160

 
 
 def test_make_genai_metric_correct_response(custom_metric):
-    assert [
-        param.name for param in inspect.signature(custom_metric.eval_fn).parameters.values()
-    ] == ["predictions", "metrics", "inputs", "targets"]
+    assert [param.name for param in inspect.signature(
+            custom_metric.eval_fn
+        ).parameters.values()] == ["predictions", "metrics", "inputs", "targets"]
 
     with mock.patch.object(
         model_utils,

tests/metrics/genai/test_genai_metrics.py~L275

         ],
     )
 
-    assert [
-        param.name for param in inspect.signature(custom_metric.eval_fn).parameters.values()
-    ] == ["predictions", "metrics", "inputs", "targets"]
+    assert [param.name for param in inspect.signature(
+            custom_metric.eval_fn
+        ).parameters.values()] == ["predictions", "metrics", "inputs", "targets"]
 
     with mock.patch.object(
         model_utils,

tests/pyfunc/test_pyfunc_model_with_type_hints.py~L339

         df = spark.createDataFrame(pd.DataFrame({"input": input_example}), schema=schema)
     df = df.withColumn("response", udf("input"))
     pdf = df.toPandas()
-    assert [
-        x.asDict(recursive=True) if isinstance(x, Row) else x for x in pdf["response"].tolist()
-    ] == input_example
+    assert [x.asDict(recursive=True) if isinstance(x, Row) else x for x in pdf[
+            "response"
+        ].tolist()] == input_example
 
 
 def test_pyfunc_model_with_no_op_type_hint_pass_signature_works():

tests/store/artifact/test_presigned_url_artifact_repo.py~L118

     remote_path = json.loads(kwargs["json_body"])["path"]
     return CreateDownloadUrlResponse(
         url=_make_presigned_url(remote_path),
-        headers=[
-            HttpHeader(name=header, value=val) for header, val in _make_headers(remote_path).items()
-        ],
+        headers=[HttpHeader(name=header, value=val) for header, val in _make_headers(
+                remote_path
+            ).items()],
     )
 
 

tests/store/artifact/test_presigned_url_artifact_repo.py~L163

     remote_path = json.loads(kwargs["json_body"])["path"]
     return CreateUploadUrlResponse(
         url=_make_presigned_url(remote_path),
-        headers=[
-            HttpHeader(name=header, value=val) for header, val in _make_headers(remote_path).items()
-        ],
+        headers=[HttpHeader(name=header, value=val) for header, val in _make_headers(
+                remote_path
+            ).items()],
     )
 
 

tests/store/artifact/test_presigned_url_artifact_repo.py~L208

             f"{PRESIGNED_URL_ARTIFACT_REPOSITORY}.PresignedUrlArtifactRepository._get_download_presigned_url_and_headers",
             return_value=CreateDownloadUrlResponse(
                 url=_make_presigned_url(remote_file_path),
-                headers=[
-                    HttpHeader(name=k, value=v) for k, v in _make_headers(remote_file_path).items()
-                ],
+                headers=[HttpHeader(name=k, value=v) for k, v in _make_headers(
+                        remote_file_path
+                    ).items()],
             ),
         ) as mock_request,
         mock.patch(

tests/store/artifact/test_presigned_url_artifact_repo.py~L253

     total_remote_path = f"{artifact_path}/{os.path.basename(local_file)}"
     creds = ArtifactCredentialInfo(
         signed_uri=_make_presigned_url(total_remote_path),
-        headers=[
-            ArtifactCredentialInfo.HttpHeader(name=k, value=v)
-            for k, v in _make_headers(total_remote_path).items()
-        ],
+        headers=[ArtifactCredentialInfo.HttpHeader(name=k, value=v) for k, v in _make_headers(
+                total_remote_path
+            ).items()],
     )
     with (
         mock.patch(

tests/store/artifact/test_presigned_url_artifact_repo.py~L295

     ):
         cred_info = ArtifactCredentialInfo(
             signed_uri=_make_presigned_url(remote_file_path),
-            headers=[
-                ArtifactCredentialInfo.HttpHeader(name=k, value=v)
-                for k, v in _make_headers(remote_file_path).items()
-            ],
+            headers=[ArtifactCredentialInfo.HttpHeader(name=k, value=v) for k, v in _make_headers(
+                    remote_file_path
+                ).items()],
         )
         artifact_repo._upload_to_cloud(cred_info, local_file, "some/irrelevant/path")
         mock_cloud.assert_called_once_with(

pandas-dev/pandas (+3 -7 lines across 1 file)

ruff format --preview

pandas/core/reshape/melt.py~L548

     If we have many columns, we could also use a regex to find our
     stubnames and pass that list on to wide_to_long
 
-    >>> stubnames = sorted(
-    ...     set([
-    ...         match[0]
-    ...         for match in df.columns.str.findall(r"[A-B]\(.*\)").values
-    ...         if match != []
-    ...     ])
-    ... )
+    >>> stubnames = sorted(set([match[0] for match in df.columns.str.findall(
+    ...             r"[A-B]\(.*\)"
+    ...         ).values if match != []]))
     >>> list(stubnames)
     ['A(weekly)', 'B(weekly)']
 

prefecthq/prefect (+31 -28 lines across 4 files)

ruff format --preview

src/integrations/prefect-kubernetes/tests/test_worker.py~L351

                                     "env": [
                                         *[
                                             {"name": k, "value": v}
-                                            for k, v in get_current_settings()
-                                            .to_environment_variables(
-                                                exclude_unset=True
+                                            for k, v in (
+                                                get_current_settings()
+                                                .to_environment_variables(
+                                                    exclude_unset=True
+                                                )
+                                                .items()
                                             )
-                                            .items()
                                         ],
                                         {
                                             "name": "PREFECT__FLOW_RUN_ID",

src/integrations/prefect-kubernetes/tests/test_worker.py~L667

                                     "env": [
                                         *[
                                             {"name": k, "value": v}
-                                            for k, v in get_current_settings()
-                                            .to_environment_variables(
-                                                exclude_unset=True
+                                            for k, v in (
+                                                get_current_settings()
+                                                .to_environment_variables(
+                                                    exclude_unset=True
+                                                )
+                                                .items()
                                             )
-                                            .items()
                                         ],
                                         {
                                             "name": "PREFECT__FLOW_RUN_ID",

src/integrations/prefect-kubernetes/tests/test_worker.py~L861

                                     "env": [
                                         *[
                                             {"name": k, "value": v}
-                                            for k, v in get_current_settings()
-                                            .to_environment_variables(
-                                                exclude_unset=True
+                                            for k, v in (
+                                                get_current_settings()
+                                                .to_environment_variables(
+                                                    exclude_unset=True
+                                                )
+                                                .items()
                                             )
-                                            .items()
                                         ],
                                         {
                                             "name": "PREFECT__FLOW_RUN_ID",

src/integrations/prefect-kubernetes/tests/test_worker.py~L1179

                                     "env": [
                                         *[
                                             {"name": k, "value": v}
-                                            for k, v in get_current_settings()
-                                            .to_environment_variables(
-                                                exclude_unset=True
+                                            for k, v in (
+                                                get_current_settings()
+                                                .to_environment_variables(
+                                                    exclude_unset=True
+                                                )
+                                                .items()
                                             )
-                                            .items()
                                         ],
                                         {
                                             "name": "PREFECT__FLOW_RUN_ID",

src/prefect/utilities/collections.py~L569

     """
     if not isinstance(obj, dict):
         return obj
-    return {
-        key: remove_nested_keys(keys_to_remove, value)
-        for key, value in cast(NestedDict[HashableT, VT], obj).items()
-        if key not in keys_to_remove
-    }
+    return {key: remove_nested_keys(keys_to_remove, value) for key, value in cast(
+            NestedDict[HashableT, VT], obj
+        ).items() if key not in keys_to_remove}
 
 
 @overload

tests/server/services/test_scheduler.py~L106

     deployment_with_active_schedules: schemas.core.Deployment,
 ):
     active_schedules = [
-        s.schedule
-        for s in await models.deployments.read_deployment_schedules(
+        s.schedule for s in await models.deployments.read_deployment_schedules(
             session=session,
             deployment_id=deployment_with_active_schedules.id,
             deployment_schedule_filter=schemas.filters.DeploymentScheduleFilter(

tests/test_settings.py~L644

     @pytest.mark.usefixtures("disable_hosted_api_server")
     def test_settings_to_environment_includes_all_settings_with_non_null_values(self):
         settings = Settings()
-        expected_names = {
-            s.name
-            for s in _get_settings_fields(Settings).values()
-            if s.value() is not None
-        }
+        expected_names = {s.name for s in _get_settings_fields(
+                Settings
+            ).values() if s.value() is not None}
         for name, metadata in SUPPORTED_SETTINGS.items():
             if metadata.get("legacy") and name in expected_names:
                 expected_names.remove(name)

python/mypy (+3 -3 lines across 1 file)

ruff format --preview

mypy/semanal.py~L1922

         self.check_type_alias_bases(bases)
 
         for tvd in tvar_defs:
-            if isinstance(tvd, TypeVarType) and any(
-                has_placeholder(t) for t in [tvd.upper_bound] + tvd.values
-            ):
+            if isinstance(tvd, TypeVarType) and any(has_placeholder(t) for t in [
+                    tvd.upper_bound
+                ] + tvd.values):
                 # Some type variable bounds or values are not ready, we need
                 # to re-analyze this class.
                 self.defer()

qdrant/qdrant-client (+6 -9 lines across 2 files)

ruff format --preview

qdrant_client/conversions/conversion.py~L78

     if "structValue" in value_:
         if "fields" not in value_["structValue"]:
             return {}
-        return dict(
-            (key, value_to_json(val))
-            for key, val in value_["structVa...*[Comment body truncated]*

ntBre added the formatter (Related to the formatter) and preview (Related to preview mode features) labels on Oct 20, 2025
```rust
/// }
/// ]
/// ```
fn needs_nested_parentheses(expr: &Expr) -> bool {
```

A review comment from a Member on this function:

You probably want `has_own_parentheses`

MichaReiser (Member) commented:

I think the comment here is relevant for your work:

```rust
// The reason to add parentheses is to avoid a syntax error when breaking an expression over multiple lines.
// Therefore, it is unnecessary to add an additional pair of parentheses if an outer expression
// is parenthesized. Unless, it's the `Parenthesize::IfBreaksParenthesizedNested` layout
// where parenthesizing nested `maybe_parenthesized_expression` is explicitly desired.
_ if f.context().node_level().is_parenthesized() => {
    return if matches!(parenthesize, Parenthesize::IfBreaksParenthesizedNested) {
        parenthesize_if_expands(&expression.format().with_options(Parentheses::Never))
            .with_indent(!is_expression_huggable(expression, f.context()))
            .fmt(f)
    } else {
        expression.format().with_options(Parentheses::Never).fmt(f)
    };
```

I'm also not sure if we should implement this preview style: as @dylwil3 pointed out in #20482 (comment), it might fall into the same category as #12856, where Black now starts introducing otherwise unnecessary parentheses. Are there alternative formattings that we could use that avoid the need for inserting parentheses?
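
As a point of reference, no parentheses are required syntactically here: inside a bracketed comprehension, the expression after `in` can already continue on the next line thanks to implicit line joining. A minimal runnable sketch (hand-written, not the output of either formatter):

```py
# Inside the list's brackets, implicit line joining means both forms parse
# identically; breaking after `in` needs no added parentheses.
squares = [
    n * n
    for n in
        range(10)  # continuation line, no parentheses required
]
assert squares == [n * n for n in range(10)]
```

So any parentheses this style inserts are a visual anchor for the indentation rather than a syntactic necessity.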

MichaReiser (Member) commented Oct 21, 2025

Reading through the issue, the main concern seems to be that very long attribute chains aren't split. That makes me wonder if the proper fix instead is to split attribute expressions in parenthesized expressions. I suspect this would be a bigger change and we probably want to give attribute chains a very low split priority. Or we could decide to only indent the content rather than adding an extra pair of parentheses?
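
To make the indent-only option concrete, here is a hand-written sketch comparing it against the parenthesized preview style (both are hypothetical targets, not produced by any formatter today; the stub data exists only so the snippet runs):

```py
# Stub objects standing in for the long attribute chain from the example.
class _Stub:
    condition_as_predicate = type("Pred", (), {"variables": ["x", "y"]})()

refined_constraint = _Stub()
a = "item"

# Parenthesized form (Black's wrap_comprehension_in preview style):
parenthesized = [
    a
    for graph_path_expression in (
        refined_constraint.condition_as_predicate.variables
    )
]

# Indent-only alternative: continue the `in` target on the next line with an
# extra indent, without inserting parentheses.
indent_only = [
    a
    for graph_path_expression in
        refined_constraint.condition_as_predicate.variables
]

assert parenthesized == indent_only == ["item", "item"]
```

The indent-only form avoids inserting parentheses that the enclosing brackets already make unnecessary, at the cost of a less conventional continuation indent.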

ntBre added 10 commits on October 21, 2025. One of the commit messages includes this example:
```py
[a for graph_path_expression in refined_constraint.condition_as_predicate.variables]
```

```shell
$ cargo run -p ruff -- format --check --preview --no-cache --config "line-length=79" fmt.py
unformatted: File would be reformatted
 --> fmt.py:1:1
  - [a for graph_path_expression in refined_constraint.condition_as_predicate.variables]
1 + [
2 +     a
3 +     for graph_path_expression in (
4 +         refined_constraint.condition_as_predicate.variables
5 +     )
6 + ]
```
ntBre force-pushed the brent/wrap-comprehension-in branch from d109f48 to e18ead1 on October 21, 2025 16:36
ntBre added a commit that referenced this pull request Oct 21, 2025
…21021)

## Summary

I spun this out from #21005 because I thought it might be helpful
separately. It just renders a nice `Diagnostic` for syntax errors
pointing to the source of the error. This seemed a bit more helpful to
me than just the byte offset when working on #21005, and we had most of
the code around after #20443 anyway.

## Test Plan

This doesn't actually affect any passing tests, but here's an example of
the additional output I got when I broke the spacing after the `in`
token:

```
    error[internal-error]: Expected 'in', found name
      --> /home/brent/astral/ruff/crates/ruff_python_formatter/resources/test/fixtures/black/cases/cantfit.py:50:79
       |
    48 |     need_more_to_make_the_line_long_enough,
    49 | )
    50 | del ([], name_1, name_2), [(), [], name_4, name_3], name_1[[name_2 for name_1 inname_0]]
       |                                                                               ^^^^^^^^
    51 | del ()
       |
```

I just appended this to the other existing output for now.
ntBre (Contributor, Author) commented Oct 21, 2025

Thanks for the pointers here and in our 1:1! The draft is in a slightly better state now, at least in terms of the tests and the ecosystem results. However, I'm now splitting expressions like this too eagerly on the `in`:

```
unformatted: File would be reformatted
 --> /tmp/tmp.h5wCjfHytw/try.py:1:1
2 | async def api_get_user_extensions(
3 |     user: User = Depends(check_user_exists),
4 | ) -> list[Extension]:
  - 
  -     user_extensions_ids = [
  -         ue.extension for ue in await get_user_extensions(user.id)
  -     ]
  -     return [
  -         ext
  -         for ext in await get_valid_extensions(False)
  -         if ext.code in user_extensions_ids
  -     ]
5 +     user_extensions_ids = [ue.extension for ue in await get_user_extensions(user.id)]
6 +     return [ext for ext in await get_valid_extensions(
7 +             False
8 +         ) if ext.code in user_extensions_ids]
```

Similar cases make up all of the ecosystem results I've looked at.

Is there an easy way to avoid that? It does seem to involve `can_omit_optional_parentheses`, right where you linked me on Discord. The split priority you mentioned above sounded perfect, but I didn't turn up anything with grep.

I will probably move on to another preview style, as you and Dylan suggested, unless this starts looking promising. It seems like there are other designs worth exploring in the future anyway.
