Fix `collections.abc.Callable` and `typing.Callable`
`typing.Callable` is not a `_SpecialForm`; it is an `_Alias`, and `collections.abc.Callable` is a class.

This PR tries to address many issues with `Callable` that are currently either 100% special-cased or simply don't work:
```python
from collections.abc import Callable

def f(t: type[object]): ...

f(Callable)             # Callable is a class, so it should be valid as a type[object] argument
class C(Callable): ...  # subclassing the runtime class
a: Callable
a.__call__              # attribute access on a bare Callable annotation
```
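For context, a quick runtime comparison of the two objects (a sketch for illustration, not part of this PR's changes):

```python
import collections.abc
import inspect
import typing

# collections.abc.Callable is a real ABC: a class object at runtime,
# usable with isinstance() via its __subclasshook__.
assert inspect.isclass(collections.abc.Callable)
assert isinstance(len, collections.abc.Callable)

# typing.Callable is not a class; it is a subscriptable alias object.
assert not inspect.isclass(typing.Callable)

# Subscripting the ABC produces a parameterized alias, and typing's
# introspection helpers normalize it back to the usual shape.
alias = collections.abc.Callable[[int], str]
assert typing.get_origin(alias) is collections.abc.Callable
assert typing.get_args(alias) == ([int], str)
```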
Status: undecided on how to proceed; two options present themselves:
- fix `typing.Callable` and introduce `_collections_abc._Callable[**P, R]`; this is less disruptive (I think)
- fix `typing.Callable` and introduce `_collections_abc.Callable[**P, R]`; this is potentially more disruptive, but more correct
Diff from mypy_primer, showing the effect of this PR on open source code:
CPython (cases_generator) (https://github.com/python/cpython)
+ Tools/cases_generator/uop_metadata_generator.py:75: error: Missing positional argument "name" in call to "__call__" of "Callable" [call-arg]
+ Tools/cases_generator/uop_metadata_generator.py:75: error: Argument 1 to "__call__" of "Callable" has incompatible type "str"; expected "CWriter" [arg-type]
+ Tools/cases_generator/uop_id_generator.py:29: error: Missing positional argument "name" in call to "__call__" of "Callable" [call-arg]
+ Tools/cases_generator/uop_id_generator.py:29: error: Argument 1 to "__call__" of "Callable" has incompatible type "str"; expected "CWriter" [arg-type]
+ Tools/cases_generator/opcode_metadata_generator.py:365: error: Missing positional argument "name" in call to "__call__" of "Callable" [call-arg]
+ Tools/cases_generator/opcode_metadata_generator.py:365: error: Argument 1 to "__call__" of "Callable" has incompatible type "str"; expected "CWriter" [arg-type]
+ Tools/cases_generator/opcode_id_generator.py:29: error: Missing positional argument "name" in call to "__call__" of "Callable" [call-arg]
+ Tools/cases_generator/opcode_id_generator.py:29: error: Argument 1 to "__call__" of "Callable" has incompatible type "str"; expected "CWriter" [arg-type]
pandera (https://github.com/pandera-dev/pandera)
- pandera/typing/common.py:234: error: Argument 1 to "signature" has incompatible type "Any | None"; expected "Callable[..., Any]" [arg-type]
+ pandera/typing/common.py:234: error: Argument 1 to "signature" has incompatible type "Any | None"; expected "_collections_abc.Callable[[VarArg(Any), KwArg(Any)], Any]" [arg-type]
+ pandera/typing/common.py:234: note: "_collections_abc.Callable[[VarArg(Any), KwArg(Any)], Any].__call__" has type "Callable[[VarArg(Any), KwArg(Any)], Any]"
- pandera/strategies/pandas_strategies.py:68: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> None
+ pandera/strategies/pandas_strategies.py:68: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | _collections_abc.Callable[[Series[Any]], Series[bool]] | _collections_abc.Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> None
- pandera/strategies/pandas_strategies.py:68: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> Series[Any]
+ pandera/strategies/pandas_strategies.py:68: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | _collections_abc.Callable[[Series[Any]], Series[bool]] | _collections_abc.Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> Series[Any]
- pandera/strategies/pandas_strategies.py:70: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> None
+ pandera/strategies/pandas_strategies.py:70: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | _collections_abc.Callable[[Series[Any]], Series[bool]] | _collections_abc.Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> None
- pandera/strategies/pandas_strategies.py:70: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> Series[Any]
+ pandera/strategies/pandas_strategies.py:70: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | _collections_abc.Callable[[Series[Any]], Series[bool]] | _collections_abc.Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> Series[Any]
- pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> None
+ pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | _collections_abc.Callable[[Series[Any]], Series[bool]] | _collections_abc.Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> None
- pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> Series[Any]
+ pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[Any, ...], dtype[Any]] | _collections_abc.Callable[[Series[Any]], Series[bool]] | _collections_abc.Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <15 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | None = ...) -> Series[Any]
- tests/mypy/pandas_modules/pandas_dataframe.py:35: note: def [P`9763, T] pipe(self, func: Callable[[DataFrame[Schema], **P], T], *args: P.args, **kwargs: P.kwargs) -> T
+ tests/mypy/pandas_modules/pandas_dataframe.py:35: note: def [P`9640, T] pipe(self, func: _collections_abc.Callable[[DataFrame[Schema], **P], T], *args: P.args, **kwargs: P.kwargs) -> T
- tests/mypy/pandas_modules/pandas_dataframe.py:35: note: def [T] pipe(self, func: tuple[Callable[..., T], str], *args: Any, **kwargs: Any) -> T
+ tests/mypy/pandas_modules/pandas_dataframe.py:35: note: def [T] pipe(self, func: tuple[_collections_abc.Callable[[VarArg(Any), KwArg(Any)], T], str], *args: Any, **kwargs: Any) -> T
spark (https://github.com/apache/spark)
+ python/pyspark/profiler.py:197: error: Need type annotation for "subcode" [var-annotated]
+ python/pyspark/profiler.py:216: error: Need type annotation for "subcode" [var-annotated]
+ python/pyspark/sql/types.py:2368: error: Cannot infer value of type parameter "_T" of "reduce" [misc]
- python/pyspark/sql/pandas/types.py:685: error: Argument 1 to "apply" of "Series" has incompatible type "Callable[[Any], Any | NaTType]"; expected "Callable[..., str | bytes | date | datetime | timedelta | <16 more items> | None]" [arg-type]
+ python/pyspark/sql/pandas/types.py:685: error: Argument 1 to "apply" of "Series" has incompatible type "Callable[[Any], Any | NaTType]"; expected "_collections_abc.Callable[[VarArg(Any), KwArg(Any)], str | bytes | date | datetime | timedelta | <16 more items> | None]" [arg-type]
- python/pyspark/sql/pandas/types.py:685: error: Incompatible return value type (got "Any | NaTType", expected "str | bytes | date | datetime | timedelta | <16 more items> | None") [return-value]
+ python/pyspark/sql/pandas/types.py:685: note: "_collections_abc.Callable[[VarArg(Any), KwArg(Any)], str | bytes | date | datetime | timedelta | <16 more items> | None].__call__" has type "Callable[[VarArg(Any), KwArg(Any)], str | bytes | date | datetime | timedelta | datetime64[date | int | None] | timedelta64[timedelta | int | None] | bool | int | float | Timestamp | Timedelta | complex | integer[Any] | floating[Any] | complexfloating[Any, Any] | Sequence[Any] | set[Any] | Mapping[Any, Any] | NAType | frozenset[Any] | None]"
+ python/pyspark/sql/classic/dataframe.py:724: error: Item "str" of "str | Any | list[str] | Column | list[Column]" has no attribute "_jc" [union-attr]
+ python/pyspark/sql/classic/dataframe.py:724: error: Item "list[str]" of "str | Any | list[str] | Column | list[Column]" has no attribute "_jc" [union-attr]
+ python/pyspark/sql/classic/dataframe.py:724: error: Item "list[Column]" of "str | Any | list[str] | Column | list[Column]" has no attribute "_jc" [union-attr]
+ python/pyspark/sql/classic/dataframe.py:853: error: Item "str" of "str | Any | list[str] | Column | list[Column]" has no attribute "_jc" [union-attr]
+ python/pyspark/sql/classic/dataframe.py:853: error: Item "list[str]" of "str | Any | list[str] | Column | list[Column]" has no attribute "_jc" [union-attr]
+ python/pyspark/sql/classic/dataframe.py:853: error: Item "list[Column]" of "str | Any | list[str] | Column | list[Column]" has no attribute "_jc" [union-attr]
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3187: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: Callable[..., S2 | NAType], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[S2]
+ python/pyspark/pandas/frame.py:3187: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], S2 | NAType], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[S2]
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., Mapping[Any, Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Mapping[Any, Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3187: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: Callable[..., S2 | NAType], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand', 'reduce'], **kwargs: Any) -> Series[S2]
+ python/pyspark/pandas/frame.py:3187: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], S2 | NAType], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand', 'reduce'], **kwargs: Any) -> Series[S2]
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand'], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand'], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | <17 more items>], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['broadcast'], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | <17 more items>], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['broadcast'], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3187: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: Callable[..., S2 | NAType], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[S2]
+ python/pyspark/pandas/frame.py:3187: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], S2 | NAType], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[S2]
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., Series[Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Series[Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3187: note: def apply(self, f: Callable[..., Series[Any]], raw: bool = ..., args: Any = ..., *, axis: Literal['columns', 1], result_type: Literal['reduce'], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3187: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Series[Any]], raw: bool = ..., args: Any = ..., *, axis: Literal['columns', 1], result_type: Literal['reduce'], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3205: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: Callable[..., S2 | NAType], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[S2]
+ python/pyspark/pandas/frame.py:3205: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], S2 | NAType], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[S2]
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., Mapping[Any, Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Mapping[Any, Any]], axis: Literal['index', 0] = ..., raw: bool = ..., result_type: None = ..., args: Any = ..., **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3205: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: Callable[..., S2 | NAType], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand', 'reduce'], **kwargs: Any) -> Series[S2]
+ python/pyspark/pandas/frame.py:3205: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], S2 | NAType], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand', 'reduce'], **kwargs: Any) -> Series[S2]
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand'], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['expand'], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | <17 more items>], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['broadcast'], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Series[Any] | <17 more items>], axis: Literal['index', 0] | Literal['columns', 1] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['broadcast'], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Series[Any]], axis: Literal['index', 0] = ..., raw: bool = ..., args: Any = ..., *, result_type: Literal['reduce'], **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3205: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: Callable[..., S2 | NAType], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[S2]
+ python/pyspark/pandas/frame.py:3205: note: def [S2: str | bytes | date | time | bool | <11 more items>] apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], S2 | NAType], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[S2]
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[Any]
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | Index[Any] | Mapping[Any, Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> Series[Any]
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., Series[Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Series[Any]], raw: bool = ..., result_type: None = ..., args: Any = ..., *, axis: Literal['columns', 1], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/frame.py:3205: note: def apply(self, f: Callable[..., Series[Any]], raw: bool = ..., args: Any = ..., *, axis: Literal['columns', 1], result_type: Literal['reduce'], **kwargs: Any) -> DataFrame
+ python/pyspark/pandas/frame.py:3205: note: def apply(self, f: _collections_abc.Callable[[VarArg(Any), KwArg(Any)], Series[Any]], raw: bool = ..., args: Any = ..., *, axis: Literal['columns', 1], result_type: Literal['reduce'], **kwargs: Any) -> DataFrame
- python/pyspark/pandas/series.py:7366: note: def to_string(self, buf: str | PathLike[str] | WriteBuffer[str], na_rep: str = ..., float_format: str | Callable[[float], str] | EngFormatter = ..., header: bool = ..., index: bool = ..., length: bool = ..., dtype: bool = ..., name: bool = ..., max_rows: int = ..., min_rows: int = ...) -> None
+ python/pyspark/pandas/series.py:7366: note: def to_string(self, buf: str | PathLike[str] | WriteBuffer[str], na_rep: str = ..., float_format: str | _collections_abc.Callable[[float], str] | EngFormatter = ..., header: bool = ..., index: bool = ..., length: bool = ..., dtype: bool = ..., name: bool = ..., max_rows: int = ..., min_rows: int = ...) -> None
- python/pyspark/pandas/series.py:7366: note: def to_string(self, buf: None = ..., na_rep: str = ..., float_format: str | Callable[[float], str] | EngFormatter = ..., header: bool = ..., index: bool = ..., length: bool = ..., dtype: bool = ..., name: bool = ..., max_rows: int = ..., min_rows: int = ...) -> str
+ python/pyspark/pandas/series.py:7366: note: def to_string(self, buf: None = ..., na_rep: str = ..., float_format: str | _collections_abc.Callable[[float], str] | EngFormatter = ..., header: bool = ..., index: bool = ..., length: bool = ..., dtype: bool = ..., name: bool = ..., max_rows: int = ..., min_rows: int = ...) -> str
- python/pyspark/pandas/namespace.py:1140: note: def [IntStrT: (int, str)] read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: list[IntStrT], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> dict[IntStrT, DataFrame]
+ python/pyspark/pandas/namespace.py:1140: note: def [IntStrT: (int, str)] read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: list[IntStrT], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | _collections_abc.Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, _collections_abc.Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | _collections_abc.Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> dict[IntStrT, DataFrame]
- python/pyspark/pandas/namespace.py:1140: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: None, *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> dict[str, DataFrame]
+ python/pyspark/pandas/namespace.py:1140: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: None, *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | _collections_abc.Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, _collections_abc.Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | _collections_abc.Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> dict[str, DataFrame]
- python/pyspark/pandas/namespace.py:1140: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: list[int | str], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> dict[int | str, DataFrame]
+ python/pyspark/pandas/namespace.py:1140: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: list[int | str], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | _collections_abc.Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, _collections_abc.Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | _collections_abc.Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> dict[int | str, DataFrame]
- python/pyspark/pandas/namespace.py:1140: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: int | str = ..., *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> DataFrame
+ python/pyspark/pandas/namespace.py:1140: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Workbook | Book | Any | Any, sheet_name: int | str = ..., *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[Any, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[tuple[Any, ...], dtype[Any]] | Index[Any] | Series[Any] | _collections_abc.Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, _collections_abc.Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | _collections_abc.Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] | None = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ..., engine_kwargs: dict[str, Any] | None = ...) -> DataFrame
+ python/pyspark/pandas/supported_api_gen.py:148: error: Need type annotation for "m" [var-annotated]
+ python/pyspark/pandas/supported_api_gen.py:207: error: Need type annotation for "m" [var-annotated]
+ python/pyspark/pandas/supported_api_gen.py:211: error: Need type annotation for "m" [var-annotated]
+ python/pyspark/ml/classification.py:3972: error: Unused "type: ignore" comment [unused-ignore]
freqtrade (https://github.com/freqtrade/freqtrade)
+ freqtrade/exchange/common.py:163: error: Type variable "freqtrade.exchange.common.F" is unbound [valid-type]
+ freqtrade/exchange/common.py:163: note: (Hint: Use "Generic[F]" or "Protocol[F]" base class to bind "F" inside a class)
+ freqtrade/exchange/common.py:163: note: (Hint: Use "F" in function signature to bind "F" inside a function)
- freqtrade/data/converter/converter.py:108: error: Argument 1 has incompatible type "dict[str, str]"; expected "Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | Callable[[DataFrame], Series[Any]] | Callable[[DataFrame], DataFrame] | ufunc | str | list[Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | Callable[[DataFrame], Series[Any]] | Callable[[DataFrame], DataFrame] | ufunc | str] | Mapping[Hashable, Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | Callable[[DataFrame], Series[Any]] | Callable[[DataFrame], DataFrame] | ufunc | str | list[Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | Callable[[DataFrame], Series[Any]] | Callable[[DataFrame], DataFrame] | ufunc | str]] | None" [arg-type]
+ freqtrade/data/converter/converter.py:108: error: Argument 1 has incompatible type "dict[str, str]"; expected "_collections_abc.Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | _collections_abc.Callable[[DataFrame], Series[Any]] | _collections_abc.Callable[[DataFrame], DataFrame] | ufunc | str | list[_collections_abc.Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | _collections_abc.Callable[[DataFrame], Series[Any]] | _collections_abc.Callable[[DataFrame], DataFrame] | ufunc | str] | Mapping[Hashable, _collections_abc.Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | _collections_abc.Callable[[DataFrame], Series[Any]] | _collections_abc.Callable[[DataFrame], DataFrame] | ufunc | str | list[_collections_abc.Callable[[DataFrame], str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any]] | _collections_abc.Callable[[DataFrame], Series[Any]] | _collections_abc.Callable[[DataFrame], DataFrame] | ufunc | str]] | None" [arg-type]
+ freqtrade/data/converter/converter.py:108: note: "dict" is missing following "Callable" protocol member:
+ freqtrade/data/converter/converter.py:108: note: __call__
+ freqtrade/data/converter/converter.py:108: note: "dict" is missing following "Callable" protocol member:
+ freqtrade/data/converter/converter.py:108: note: __call__
+ freqtrade/data/converter/converter.py:108: note: "dict" is missing following "Callable" protocol member:
+ freqtrade/data/converter/converter.py:108: note: __call__
+ freqtrade/exchange/exchange_ws.py:211: error: F? not callable [misc]
+ freqtrade/exchange/exchange.py:1601: error: F? not callable [misc]
+ freqtrade/exchange/exchange.py:1615: error: F? not callable [misc]
+ freqtrade/exchange/exchange.py:1681: error: F? not callable [misc]
+ freqtrade/exchange/exchange.py:1821: error: F? not callable [misc]
+ freqtrade/exchange/exchange.py:1829: error: F? not callable [misc]
+ freqtrade/exchange/exchange.py:2504: error: F? not callable [misc]
+ freqtrade/exchange/okx.py:207: error: F? not callable [misc]
+ freqtrade/exchange/hyperliquid.py:196: error: F? not callable [misc]
+ freqtrade/exchange/gate.py:144: error: F? not callable [misc]
+ freqtrade/exchange/gate.py:150: error: F? not callable [misc]
+ freqtrade/exchange/bybit.py:271: error: F? not callable [misc]
+ freqtrade/data/entryexitanalysis.py:58: note: "tuple" is missing following "Callable" protocol member:
+ freqtrade/data/entryexitanalysis.py:58: note: __call__
+ freqtrade/rpc/rpc.py:947: error: F? not callable [misc]
+ freqtrade/rpc/rpc.py:1133: error: F? not callable [misc]
+ freqtrade/rpc/telegram.py:128: error: "_collections_abc.Callable[[VarArg(Any), KwArg(Any)], Coroutine[Any, Any, None]]" has no attribute "__name__" [attr-defined]
+ freqtrade/plot/plotting.py:187: note: "tuple" is missing following "Callable" protocol member:
+ freqtrade/plot/plotting.py:187: note: __call__
+ freqtrade/plot/plotting.py:188: note: "tuple" is missing following "Callable" protocol member:
+ freqtrade/plot/plotting.py:188: note: __call__
+ freqtrade/freqtradebot.py:1402: error: F? not callable [misc]
+ freqtrade/freqtradebot.py:1576: error: F? not callable [misc]
+ freqtrade/freqtradebot.py:1728: error: F? not callable [misc]
+ freqtrade/freqtradebot.py:1779: error: F? not callable [misc]
+ freqtrade/freqtradebot.py:1877: error: F? not callable [misc]
+ freqtrade/worker.py:182: error: "_collections_abc.Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "__name__" [attr-defined]
- freqtrade/templates/FreqaiExampleStrategy.py:244: error: No overload variant of "__setitem__" of "_LocIndexerFrame" matches argument types "tuple[Series[bool], list[str]]", "tuple[int, str]" [call-overload]
+ freqtrade/templates/FreqaiExampleStrategy.py:244: error: No overload variant of "__setitem__" of "_LocIndexerFrame" matches argument types "tuple[Any, list[str]]", "tuple[int, str]" [call-overload]
- freqtrade/templates/FreqaiExampleStrategy.py:245: error: Unsupported left operand type for & ("tuple[Index[Any] | Series[builtins.bool] | ndarray[tuple[Any, ...], dtype[numpy.bool[builtins.bool]]] | list[builtins.bool] | str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any] | list[Any] | slice[Any, Any, Any] | tuple[str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any], ...], ...]") [operator]
- freqtrade/templates/FreqaiExampleStrategy.py:254: error: No overload variant of "__setitem__" of "_LocIndexerFrame" matches argument types "tuple[Series[bool], list[str]]", "tuple[int, str]" [call-overload]
+ freqtrade/templates/FreqaiExampleStrategy.py:254: error: No overload variant of "__setitem__" of "_LocIndexerFrame" matches argument types "tuple[Any, list[str]]", "tuple[int, str]" [call-overload]
- freqtrade/templates/FreqaiExampleStrategy.py:255: error: Unsupported left operand type for & ("tuple[Index[Any] | Series[builtins.bool] | ndarray[tuple[Any, ...], dtype[numpy.bool[builtins.bool]]] | list[builtins.bool] | str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any] | list[Any] | slice[Any, Any, Any] | tuple[str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any], ...], ...]") [operator]
+ freqtrade/plugins/protections/cooldown_period.py:43: error: Unused "type: ignore" comment [unused-ignore]
+ freqtrade/plugins/pairlist/PercentChangePairList.py:237: error: Unused "type: ignore" comment [unused-ignore]
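The repeated `F? not callable` errors above come from a common decorator idiom: a TypeVar bound to `Callable` is used to annotate a wrapper, and the wrapped value is then called through the bare type variable. The following is a minimal, hypothetical reproduction of that pattern (the names `retrier` and `fetch` are invented for illustration, not taken from freqtrade):

```python
from collections.abc import Callable
from typing import Any, TypeVar

# TypeVar bound to Callable -- the idiom the "F? not callable"
# errors point at once Callable is no longer special-cased.
F = TypeVar("F", bound=Callable[..., Any])

def retrier(f: F) -> F:
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Calling through the bound TypeVar is what the stricter
        # stubs flag when F's callability is no longer assumed.
        return f(*args, **kwargs)
    return wrapper  # type: ignore[return-value]

@retrier
def fetch(pair: str) -> str:
    return pair.upper()
```

At runtime this works unchanged; only the static analysis of `f(...)` is affected.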
manticore (https://github.com/trailofbits/manticore)
+ tests/auto_generators/make_dump.py:228: error: Need type annotation for "groups" [var-annotated]
+ manticore/utils/log.py:43: error: Need type annotation for "colors" [var-annotated]
+ manticore/utils/helpers.py:40: error: Need type annotation for "c" [var-annotated]
+ manticore/platforms/linux_syscall_stubs.py:1177: error: Need type annotation for "x" [var-annotated]
+ manticore/core/smtlib/solver.py:297: error: Need type annotation for "lparen" [var-annotated]
+ manticore/core/smtlib/solver.py:297: error: Need type annotation for "rparen" [var-annotated]
+ manticore/native/cpu/abstractcpu.py:366: error: Need type annotation for "argument_iter" [var-annotated]
+ manticore/platforms/linux.py:3731: error: Need type annotation for "obj" [var-annotated]
mypy (https://github.com/python/mypy)
+ mypyc/ir/pprint.py:451: error: Need type annotation for "source_to_error" [var-annotated]
+ mypyc/ir/pprint.py:451: note: See https://mypy.rtfd.io/en/stable/_refs.html#code-var-annotated for more info
+ mypy/modulefinder.py:41: error: Need type annotation for "python_path" [var-annotated]
+ mypy/modulefinder.py:43: error: Need type annotation for "mypy_path" [var-annotated]
+ mypy/modulefinder.py:45: error: Need type annotation for "package_path" [var-annotated]
+ mypy/modulefinder.py:47: error: Need type annotation for "typeshed_path" [var-annotated]
+ mypy/modulefinder.py:713: error: "Never" object is not iterable [misc]
+ mypy/modulefinder.py:714: error: Cannot determine type of "gi_path" [has-type]
+ mypy/modulefinder.py:714: note: See https://mypy.rtfd.io/en/stable/_refs.html#code-has-type for more info
+ mypy/modulefinder.py:717: error: Cannot determine type of "gi_spec" [has-type]
+ mypy/modulefinder.py:720: error: Cannot determine type of "gi_path" [has-type]
+ mypy/modulefinder.py:731: error: Need type annotation for "parent_gitignores" (hint: "parent_gitignores: list[<type>] = ...") [var-annotated]
+ mypy/modulefinder.py:940: error: "Never" object is not iterable [misc]
+ mypy/modulefinder.py:942: error: Cannot determine type of "site_packages" [has-type]
+ mypy/modulefinder.py:960: error: Cannot determine type of "sys_path" [has-type]
+ mypy/modulefinder.py:960: error: Cannot determine type of "site_packages" [has-type]
+ mypy/find_sources.py:172: error: Left operand of "or" is always false [redundant-expr]
+ mypy/find_sources.py:172: note: See https://mypy.rtfd.io/en/stable/_refs.html#code-redundant-expr for more info
+ mypy/find_sources.py:172: error: Right operand of "or" is never evaluated [unreachable]
+ mypy/find_sources.py:172: note: See https://mypy.rtfd.io/en/stable/_refs.html#code-unreachable for more info
+ mypy/find_sources.py:213: error: Need type annotation for "result" [var-annotated]
+ mypy/plugins/attrs.py:1103: error: Need type annotation for "field_to_types" [var-annotated]
+ mypy/checkexpr.py:818: error: Need type annotation for "result" [var-annotated]
+ mypy/test/data.py:82: error: Implicit return in function which does not return [misc]
+ mypy/test/data.py:91: error: Implicit return in function which does not return [misc]
+ mypy/semanal_main.py:246: error: Need type annotation for "k" [var-annotated]
+ mypy/stubtest.py:380: error: Need type annotation for "symbol_table" [var-annotated]
+ mypy/stubtest.py:970: error: Need type annotation for "func" [var-annotated]
+ mypy/stubtest.py:988: error: Statement is unreachable [unreachable]
+ mypy/main.py:191: error: Need type annotation for "messages_by_file" [var-annotated]
+ mypy/main.py:1528: error: "Never" object is not iterable [misc]
+ mypy/main.py:1530: error: Cannot determine type of "sys_path" [has-type]
+ mypy/inspections.py:286: error: Need type annotation for "combined_attrs" [var-annotated]
optuna (https://github.com/optuna/optuna)
+ optuna/_experimental.py:55: error: ParamSpec "FP" is unbound [valid-type]
+ optuna/_experimental.py:55: error: Type variable "optuna._experimental.FT" is unbound [valid-type]
+ optuna/_experimental.py:55: note: (Hint: Use "Generic[FT]" or "Protocol[FT]" base class to bind "FT" inside a class)
+ optuna/_experimental.py:55: note: (Hint: Use "FT" in function signature to bind "FT" inside a function)
+ optuna/_experimental.py:74: error: "_collections_abc.Callable[FP, FT]" has no attribute "__qualname__" [attr-defined]
+ optuna/_experimental.py:95: error: Type variable "optuna._experimental.CT" is unbound [valid-type]
+ optuna/_experimental.py:95: note: (Hint: Use "Generic[CT]" or "Protocol[CT]" base class to bind "CT" inside a class)
+ optuna/_experimental.py:95: note: (Hint: Use "CT" in function signature to bind "CT" inside a function)
+ optuna/_deprecated.py:60: error: ParamSpec "FP" is unbound [valid-type]
+ optuna/_deprecated.py:60: error: Type variable "optuna._deprecated.FT" is unbound [valid-type]
+ optuna/_deprecated.py:60: note: (Hint: Use "Generic[FT]" or "Protocol[FT]" base class to bind "FT" inside a class)
+ optuna/_deprecated.py:60: note: (Hint: Use "FT" in function signature to bind "FT" inside a function)
+ optuna/_deprecated.py:107: error: "_collections_abc.Callable[FP, FT]" has no attribute "__name__" [attr-defined]
+ optuna/_deprecated.py:127: error: Type variable "optuna._deprecated.CT" is unbound [valid-type]
+ optuna/_deprecated.py:127: note: (Hint: Use "Generic[CT]" or "Protocol[CT]" base class to bind "CT" inside a class)
+ optuna/_deprecated.py:127: note: (Hint: Use "CT" in function signature to bind "CT" inside a function)
+ optuna/_convert_positional_args.py:55: error: ParamSpec "_P" is unbound [valid-type]
+ optuna/_convert_positional_args.py:55: error: Type variable "optuna._convert_positional_args._T" is unbound [valid-type]
+ optuna/_convert_positional_args.py:55: note: (Hint: Use "Generic[_T]" or "Protocol[_T]" base class to bind "_T" inside a class)
+ optuna/_convert_positional_args.py:55: note: (Hint: Use "_T" in function signature to bind "_T" inside a function)
+ optuna/_convert_positional_args.py:99: error: "_collections_abc.Callable[_P, _T]" has no attribute "__name__" [attr-defined]
+ optuna/_convert_positional_args.py:107: error: "_collections_abc.Callable[_P, _T]" has no attribute "__name__" [attr-defined]
+ optuna/_convert_positional_args.py:120: error: "_collections_abc.Callable[_P, _T]" has no attribute "__name__" [attr-defined]
+ optuna/_convert_positional_args.py:130: error: "_collections_abc.Callable[_P, _T]" has no attribute "__name__" [attr-defined]
+ tests/test_convert_positional_args.py:33: error: "_collections_abc.Callable[Any, _T?]" has no attribute "__name__" [attr-defined]
+ tests/test_convert_positional_args.py:39: error: Unused "type: ignore" comment [unused-ignore]
+ tests/test_convert_positional_args.py:60: error: Unused "type: ignore" comment [unused-ignore]
+ tests/test_convert_positional_args.py:61: error: Unused "type: ignore" comment [unused-ignore]
+ tests/test_convert_positional_args.py:97: error: _T? has no attribute "method" [attr-defined]
+ tests/test_convert_positional_args.py:137: error: Unused "type: ignore" comment [unused-ignore]
+ tests/test_convert_positional_args.py:141: error: Unused "type: ignore" comment [unused-ignore]
+ optuna/importance/_fanova/_tree.py:124: error: _T? has no attribute "__iter__" (not iterable) [attr-defined]
+ optuna/importance/_fanova/_tree.py:231: error: _T? has no attribute "__iter__" (not iterable) [attr-defined]
+ optuna/samplers/_tpe/_truncnorm.py:81: error: Unsupported operand type for unary - (_T?) [operator]
+ optuna/study/study.py:1563: error: _T? has no attribute "directions" [attr-defined]
+ optuna/study/study.py:1567: error: _T? has no attribute "_storage" [attr-defined]
+ optuna/study/study.py:1567: error: _T? has no attribute "_study_id" [attr-defined]
+ optuna/study/study.py:1568: error: _T? has no attribute "_storage" [attr-defined]
+ optuna/study/study.py:1568: error: _T? has no attribute "_study_id" [attr-defined]
+ optuna/study/study.py:1570: error: _T? has no attribute "user_attrs" [attr-defined]
+ optuna/study/study.py:1571: error: _T? has no attribute "set_user_attr" [attr-defined]
+ optuna/study/study.py:1574: error: _T? has no attribute "get_trials" [attr-defined]
+ optuna/study/study.py:1575: error: _T? has no attribute "directions" [attr-defined]
+ optuna/study/study.py:1578: error: _T? has no attribute "directions" [attr-defined]
+ optuna/study/study.py:1581: error: _T? has no attribute "_storage" [attr-defined]
+ optuna/study/study.py:1581: error: _T? has no attribute "_study_id" [attr-defined]
+ optuna/samplers/_tpe/sampler.py:289: error: Unsupported decorated constructor type [misc]
+ tests/test_multi_objective.py:45: error: _T? has no attribute "trials" [attr-defined]
+ tests/test_multi_objective.py:45: error: _T? has no attribute "directions" [attr-defined]
+ tests/test_multi_objective.py:47: error: _T? has no attribute "optimize" [attr-defined]
+ tests/test_multi_objective.py:48: error: _T? has no attribute "trials" [attr-defined]
+ tests/test_multi_objective.py:48: error: _T? has no attribute "directions" [attr-defined]
+ tests/test_multi_objective.py:50: error: _T? has no attribute "trials" [attr-defined]
+ tests/test_multi_objective.py:50: error: _T? has no attribute "directions" [attr-defined]
+ tests/test_multi_objective.py:52: error: _T? has no attribute "optimize" [attr-defined]
+ tests/test_multi_objective.py:53: error: _T? has no attribute "trials" [attr-defined]
+ tests/test_multi_objective.py:53: error: _T? has no attribute "directions" [attr-defined]
+ tests/test_multi_objective.py:54: error: _T? has no attribute "trials" [attr-defined]
+ tests/test_multi_objective.py:54: error: _T? has no attribute "directions" [attr-defined]
+ tests/test_multi_objective.py:76: error: _T? has no attribute "add_trials" [attr-defined]
+ tests/test_multi_objective.py:78: error: _T? has no attribute "trials" [attr-defined]
+ tests/test_multi_objective.py:79: error: _T? has no attribute "directions" [attr-defined]
+ tests/test_experimental.py:55: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/test_experimental.py:68: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/test_experimental.py:81: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/test_experimental.py:85: error: Unused "type: ignore" comment [unused-ignore]
+ tests/test_experimental.py:94: error: CT? has no attribute "__name__" [attr-defined]
+ tests/test_experimental.py:95: error: CT? has no attribute "__init__" [attr-defined]
+ tests/test_experimental.py:96: error: CT? has no attribute "__doc__" [attr-defined]
+ tests/test_experimental.py:99: error: CT? not callable [misc]
+ tests/test_experimental.py:108: error: CT? not callable [misc]
+ tests/test_deprecated.py:56: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/test_deprecated.py:72: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/test_deprecated.py:76: error: Unused "type: ignore" comment [unused-ignore]
+ tests/test_deprecated.py:86: error: CT? has no attribute "__name__" [attr-defined]
+ tests/test_deprecated.py:87: error: CT? has no attribute "__init__" [attr-defined]
+ tests/test_deprecated.py:88: error: CT? has no attribute "__doc__" [attr-defined]
+ tests/test_deprecated.py:93: error: CT? not callable [misc]
+ tests/test_deprecated.py:102: error: CT? not callable [misc]
+ tests/test_deprecated.py:137: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/test_deprecated.py:168: error: CT? has no attribute "__name__" [attr-defined]
+ tests/test_deprecated.py:169: error: CT? has no attribute "__doc__" [attr-defined]
+ tests/test_deprecated.py:172: error: CT? not callable [misc]
+ tests/test_deprecated.py:192: error: "_collections_abc.Callable[Any, FT?]" has no attribute "__name__" [attr-defined]
+ tests/trial_tests/test_trials.py:44: error: _T? has no attribute "enqueue_trial" [attr-defined]
+ tests/trial_tests/test_trials.py:45: error: _T? has no attribute "ask" [attr-defined]
+ tests/study_tests/test_study_summary.py:16: error: _T? has no attribute "_storage" [attr-defined]
+ tests/study_tests/test_study_summary.py:35: error: _T? has no attribute "_storage" [attr-defined]
+ tests/storages_tests/rdb_tests/create_db.py:90: error: _T? has no attribute "set_user_attr" [attr-defined]
+ tests/storages_tests/rdb_tests/create_db.py:91: error: _T? has no attribute "optimize" [attr-defined]
+ tests/storages_tests/rdb_tests/create_db.py:101: error: _T? has no attribute "set_user_attr" [attr-defined]
+ tests/storages_tests/rdb_tests/create_db.py:102: error: _T? has no attribute "optimize" [attr-defined]
+ tests/storages_tests/rdb_tests/create_db.py:109: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:74: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:108: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:113: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:118: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:133: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:139: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:153: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:159: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:257: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:258: error: _T? has no attribute "_storage" [attr-defined]
+ tests/samplers_tests/test_qmc.py:258: error: _T? has no attribute "_study_id" [attr-defined]
+ tests/samplers_tests/test_qmc.py:265: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:266: error: _T? has no attribute "_storage" [attr-defined]
+ tests/samplers_tests/test_qmc.py:267: error: _T? has no attribute "_study_id" [attr-defined]
+ tests/samplers_tests/test_qmc.py:277: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_qmc.py:278: error: _T? has no attribute "_storage" [attr-defined]
+ tests/samplers_tests/test_qmc.py:279: error: _T? has no attribute "_study_id" [attr-defined]
+ tests/samplers_tests/test_qmc.py:295: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:19: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:20: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:21: error: _T? has no attribute "trials" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:27: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:30: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:32: error: _T? has no attribute "trials" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:33: error: _T? has no attribute "trials" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:51: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:52: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:57: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:58: error: _T? has no attribute "trials" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:73: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:74: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:77: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:78: error: _T? has no attribute "trials" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:94: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:95: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:98: error: _T? has no attribute "optimize" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:104: error: _T? has no attribute "sampler" [attr-defined]
+ tests/samplers_tests/test_partial_fixed.py:114: error: _T? has no attribute "optimize" [attr-defined]
+
... (truncated 17682 lines) ...
I'm always in favor of bringing our stubs more in line with reality, so in general I'm in favor of the change. But unfortunately at least mypy seems heavily dependent on the current typing – as shown by the mypy primer output. We'd need to ensure that current type checkers will work with the updated stubs, before making this change.
Maybe it's also worth trying both changes (to typing and _collections_abc) in isolation to see if both break the world, or if you could at least do half of this PR.
We'd need to ensure that current type checkers will work with the updated stubs, before making this change.
this is a circular dependency. how can we ever change anything if the change needs to be compatible with our dependents?
i will try them in isolation. i feel that the class with generics isn't going to be so well consumed, although i would be interested in pulling in the maintainers of other type checkers to see if they are welcoming to the change (fyi, i maintain pycharm)
According to mypy_primer, this change has no effect on the checked open source code. 🤖🎉
this is a circular dependency. how can we ever change anything if the change needs to be compatible with our dependents?
In that case this needs coordination with the dependents, but we won't change this unilaterally.
Just changing typing.Callable looks more promising, indeed, but pyright would obviously need to be changed as well (although I suspect the change is rather simple).
this is a circular dependency. how can we ever change anything if the change needs to be compatible with our dependents?
That's why I said in https://github.com/astral-sh/ty/issues/1215#issuecomment-3317471670:
I think that would be likely to either break existing type checkers, or else have somewhat unexpected effects on their handling of
Callable. I'm not sure it's worth the disruption to the ecosystem.
It's not impossible to make a change like this but, as @srittau says, it is nontrivial. You'll first need to get consensus among all the major type checkers that the change is worth making (or at the very least, make sure that they're aware the change is coming, and make sure that they're willing to adapt to the change). Ideally we'd have PRs "ready to go" in the type checkers that need changes before the typeshed change is merged.
so, any advice on resolving these failing tests? should we summon the maintainer of pyright? (or basedpyright?)
Yes, @erictraut might be able to help.
This is a pretty disruptive change. Is there a specific problem that it solves? I'm not against making improvements if there's a real benefit to users or type checker authors, but it's not clear to me what those benefits are in this case.
As @AlexWaygood mentioned, if we want to model typing.Callable as an _Alias, then I think there needs to be some other symbol that it aliases. It can't alias itself. Presumably, it aliases collections.abc.Callable.
But then how is collections.abc.Callable defined? If it's not defined as a _SpecialForm, then presumably it would be defined as a normal class definition? Defining it in that way will require some hard-coded logic in type checkers because Callable does not follow the normal rules for a normal class.
Is there a specific problem that it solves
primarily that Callable is actually a type and has a __call__ attribute, plus the flow-on consequences of that:
```python
from collections.abc import Callable

def f(t: type[object]): ...
f(Callable)

class C(Callable): ...

a: Callable
a.__call__
```
will require some hard-coded logic in type checkers because Callable does not follow the normal rules for a normal class.
what exactly are the rules that differ from a normal class? i'm not aware of any; i would expect:
```python
class Callable[**Parameters, Return](Protocol):  # not actually a protocol, but acts like one, same as everything else in collections.abc
    @abstractmethod
    def __call__(self, *args: Parameters.args, **kwargs: Parameters.kwargs) -> Return: ...
```
how much effort would be involved in un-special casing Callable and just using this new definition? i'd imagine there would need to be some logic to connect the alias to the definition, but the rest would just be removal of special-casing?
so i see two options:
- introduce a `_Callable[**P, R]` into `collections.abc`; this would be the definition for the alias
- just fix the problem in full and make the definition as it should be; this would likely require more changes in type checkers than option 1
I'm still trying to understand if the proposed change is solving an actual problem that users are hitting or if this falls more under the category of "it would be nice if this were more aligned with the runtime"? If it's the latter, then I think we need to consider the impact and cost of the change and weigh it against the benefits.
how much effort would be involved in un-special casing Callable and just using this new defintion?
I can't speak for other type checkers, but for pyright it's not as simple as "un-special casing". Callable does not act like normal classes in a number of respects, so special-casing is still required. Notably, when used as a type form, it accepts a list expression for its first type argument. It also accepts ... (which has a special meaning) and the Concatenate special form. I'd need to investigate further to enumerate all of the special casing.
I'm still trying to understand if the proposed change is solving an actual problem that users are hitting
i have run into these issues when trying to write code
additionally, it is very confusing to see that the definition of Callable is `Callable: _SpecialForm`
Notably, when used as a type form, it accepts a list expression for its first type argument. It also accepts `...` (which has a special meaning) and the `Concatenate` special form.
i don't think there is anything special about these semantics at all:
Code sample in basedpyright playground
```python
from collections.abc import Callable
from typing import Concatenate

class CustomCa[**P, R]: ...

_ = Callable[..., int]
_ = CustomCa[..., int]

def f[**P](
    a: Callable[[Concatenate[int, P]], None],
    b: CustomCa[[Concatenate[int, P]], None],
): ...
```
the only special casing that i am aware of is the assumption that Callable has a definition of __get__ that is the same as FunctionType.__get__
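That `__get__` assumption is observable at runtime: plain functions are descriptors, so a function stored as a class attribute binds when accessed through an instance, and type checkers extend the same behavior to values annotated as Callable. A minimal sketch (the names `greet` and `C` are hypothetical):

```python
from collections.abc import Callable

def greet(self: object) -> str:
    # A plain function; FunctionType.__get__ binds it on attribute access.
    return "hi"

class C:
    # Annotated as Callable, but stored as an ordinary function attribute.
    method: Callable[[object], str] = greet

c = C()
# Attribute access triggers FunctionType.__get__, yielding a bound method,
# so no explicit `self` argument is passed here.
bound = c.method()  # == "hi"
```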