
[Bug]: `TimeoutException` when sending many concurrent API requests to Podman

Open vlflorian opened this issue 10 months ago • 10 comments

Testcontainers version

4.4.0

Using the latest Testcontainers version?

Yes

Host OS

Windows

Host arch

AMD64

.NET version

8.0.401

Docker version

$ podman version
Client:       Podman Engine
Version:      5.4.2
API Version:  5.4.2
Go Version:   go1.24.2
Git Commit:   be85287fcf4590961614ee37be65eeb315e5d9ff
Built:        Wed Apr  2 18:33:14 2025
OS/Arch:      windows/amd64

Server:       Podman Engine
Version:      5.4.2
API Version:  5.4.2
Go Version:   go1.23.7
Git Commit:   be85287fcf4590961614ee37be65eeb315e5d9ff
Built:        Wed Apr  2 02:00:00 2025
OS/Arch:      linux/amd64

Docker info

$ podman info
Client:
  APIVersion: 5.4.2
  Built: 1743611594
  BuiltTime: Wed Apr  2 18:33:14 2025
  GitCommit: be85287fcf4590961614ee37be65eeb315e5d9ff
  GoVersion: go1.24.2
  Os: windows
  OsArch: windows/amd64
  Version: 5.4.2
host:
  arch: amd64
  buildahVersion: 1.39.4
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.13-1.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 99.98
    systemPercent: 0.01
    userPercent: 0.01
  cpus: 24
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: container
    version: "41"
  eventLogger: journald
  freeLocks: 2043
  hostname: FRA-D-246
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.15.167.4-microsoft-standard-WSL2
  linkmode: dynamic
  logDriver: journald
  memFree: 14985383936
  memTotal: 16608931840
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.14.0-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.14.0
    package: netavark-1.14.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.14.1
  ociRuntime:
    name: crun
    package: crun-1.21-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.21
      commit: 10269840aa07fb7e6b7e1acff6198692d8ff5c88
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250415.g2340bbf-1.fc41.x86_64
    version: ""
  remoteSocket:
    exists: true
    path: unix:///run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: true
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4294967296
  swapTotal: 4294967296
  uptime: 102h 35m 36.00s (Approximately 4.25 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 0
    stopped: 5
  graphDriverName: overlay
  graphOptions:
    overlay.additionalImageStores:
    - /usr/lib/containers/storage
    overlay.imagestore: /usr/lib/containers/storage
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 1081101176832
  graphRootUsed: 1825034240
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.4.2
  BuildOrigin: Fedora Project
  Built: 1743552000
  BuiltTime: Wed Apr  2 02:00:00 2025
  GitCommit: be85287fcf4590961614ee37be65eeb315e5d9ff
  GoVersion: go1.23.7
  Os: linux
  OsArch: linux/amd64
  Version: 5.4.2

What happened?

Hi!

I am not entirely sure whether this is a bug or related to something else, but I wanted to follow up on my question here (where someone reported the same problem we are encountering) so as not to derail that conversation.

Context: we removed Docker Desktop and installed Podman with the Docker extension. We use Testcontainers in our tests for spinning up Redis, Postgres, and MinIO. The problem occurs with any of these containers. Several colleagues and I are all hitting the same issue.

Issue: a `TimeoutException` is thrown immediately after launching the tests with `dotnet test`. Stack trace below.

Thank you for any help or insight anyone could give us!

Relevant log output

Failed Xyz.Tests.SomeTest [1 ms]
  Error Message:
   System.TimeoutException : The operation has timed out.
  Stack Trace:
     at System.IO.Pipes.NamedPipeClientStream.ConnectInternal(Int32 timeout, CancellationToken cancellationToken, Int32 startTime)
   at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
   at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
   at Docker.DotNet.DockerClient.<>c__DisplayClass6_0.<<-ctor>b__0>d.MoveNext() in /_/src/Docker.DotNet/DockerClient.cs:line 80
--- End of stack trace from previous location ---
   at Microsoft.Net.Http.Client.ManagedHandler.ProcessRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken) in /_/src/Docker.DotNet/Microsoft.Net.Http.Client/ManagedHandler.cs:line 164
   at Microsoft.Net.Http.Client.ManagedHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) in /_/src/Docker.DotNet/Microsoft.Net.Http.Client/ManagedHandler.cs:line 80
   at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
   at Docker.DotNet.DockerClient.PrivateMakeRequestAsync(TimeSpan timeout, HttpCompletionOption completionOption, HttpMethod method, String path, IQueryString queryString, IDictionary`2 headers, IRequestContent data, CancellationToken cancellationToken) in /_/src/Docker.DotNet/DockerClient.cs:line 423
   at Docker.DotNet.DockerClient.MakeRequestAsync[T](IEnumerable`1 errorHandlers, HttpMethod method, String path, IQueryString queryString, IRequestContent body, IDictionary`2 headers, TimeSpan timeout, CancellationToken token) in /_/src/Docker.DotNet/DockerClient.cs:line 254
   at Docker.DotNet.ContainerOperations.StartContainerAsync(String id, ContainerStartParameters parameters, CancellationToken cancellationToken) in /_/src/Docker.DotNet/Endpoints/ContainerOperations.cs:line 207        
   at DotNet.Testcontainers.Clients.TestcontainersClient.StartAsync(String id, CancellationToken ct) in /_/src/Testcontainers/Clients/TestcontainersClient.cs:line 129
   at DotNet.Testcontainers.Containers.DockerContainer.UnsafeStartAsync(CancellationToken ct) in /_/src/Testcontainers/Containers/DockerContainer.cs:line 505
   at DotNet.Testcontainers.Containers.DockerContainer.StartAsync(CancellationToken ct) in /_/src/Testcontainers/Containers/DockerContainer.cs:line 301
   at DotNet.Testcontainers.Containers.ResourceReaper.GetAndStartNewAsync(Guid sessionId, IDockerEndpointAuthenticationConfiguration dockerEndpointAuthConfig, IImage resourceReaperImage, IMount dockerSocket, ILogger logger, Boolean requiresPrivilegedMode, TimeSpan initTimeout, CancellationToken ct) in /_/src/Testcontainers/Containers/ResourceReaper.cs:line 224
   at DotNet.Testcontainers.Containers.ResourceReaper.GetAndStartNewAsync(Guid sessionId, IDockerEndpointAuthenticationConfiguration dockerEndpointAuthConfig, IImage resourceReaperImage, IMount dockerSocket, ILogger logger, Boolean requiresPrivilegedMode, TimeSpan initTimeout, CancellationToken ct) in /_/src/Testcontainers/Containers/ResourceReaper.cs:line 248
   at DotNet.Testcontainers.Containers.ResourceReaper.GetAndStartDefaultAsync(IDockerEndpointAuthenticationConfiguration dockerEndpointAuthConfig, ILogger logger, Boolean isWindowsEngineEnabled, CancellationToken ct) in /_/src/Testcontainers/Containers/ResourceReaper.cs:line 135
   at DotNet.Testcontainers.Clients.TestcontainersClient.RunAsync(IContainerConfiguration configuration, CancellationToken ct) in /_/src/Testcontainers/Clients/TestcontainersClient.cs:line 319
   at DotNet.Testcontainers.Containers.DockerContainer.UnsafeCreateAsync(CancellationToken ct) in /_/src/Testcontainers/Containers/DockerContainer.cs:line 454
   at DotNet.Testcontainers.Containers.DockerContainer.StartAsync(CancellationToken ct) in /_/src/Testcontainers/Containers/DockerContainer.cs:line 298
   at Xyz.Inf.TestUtils.DbFixtures.PostgresFixture.InitializeAsync()

Additional information

No response

vlflorian avatar Apr 29 '25 14:04 vlflorian

System.IO.Pipes.NamedPipeClientStream.ConnectInternal(Int32 timeout, CancellationToken cancellationToken, Int32 startTime)

It looks like Testcontainers can't connect to the named pipe (Podman Engine). What's the Podman endpoint in Settings > Resources? Could you also share the output of podman context ls? Are you able to run containers from the CLI?

HofmeisterAn avatar Apr 29 '25 14:04 HofmeisterAn

Hi @HofmeisterAn,

I greatly appreciate the swift response and help!

The Podman endpoint seems to be: npipe://\.\pipe\podman-machine-default

$  podman context ls
Name                         URI                                                          Identity                                                               Default     ReadWrite
podman-machine-default       ssh://[email protected]:53942/run/user/1000/podman/podman.sock  C:\Users\vl\.local\share\containers\podman\machine\machine  false       true
podman-machine-default-root  ssh://[email protected]:53942/run/podman/podman.sock            C:\Users\vl\.local\share\containers\podman\machine\machine  true        true

Running containers from the CLI works fine.

vlflorian avatar Apr 29 '25 15:04 vlflorian

Are you sure everything was uninstalled correctly? No leftovers? Are there any environment variables or settings left in ~/.testcontainers.properties? Could you please share the Testcontainers logs?

The configuration you shared looks good and is similar to what I have on my test machine.

HofmeisterAn avatar Apr 29 '25 15:04 HofmeisterAn

I reinstalled Podman and Podman Desktop to be sure, but to no effect. Weirdly enough, the tests work fine when running in Rider; apparently it had a setting that limited test parallelism to 1. When set to 4, I get the same TimeoutException as the one that occurs when running with dotnet test. (Both ways of testing ran fine with Docker.) I am clueless :-)

  • There is one env var that we have set: DOCKER_HOST=npipe://./pipe/podman-machine-default, could this be a problem?

  • ~/.testcontainers.properties contents:

docker.auth.config={...}
docker.host=npipe://./pipe/podman-machine-default
  • I do not know how to view the TestContainer logs and can't seem to find how to, sorry.

vlflorian avatar Apr 30 '25 09:04 vlflorian

The configuration settings DOCKER_HOST and docker.host shouldn't be necessary. Although they seem correct, could you please try removing them? Testcontainers should automatically detect the container runtime environment.

  • I do not know how to view the TestContainer logs and can't seem to find how to, sorry.

Try running the tests in debug mode; the logs should appear in your IDE's console output.

HofmeisterAn avatar Apr 30 '25 10:04 HofmeisterAn
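To make those logs visible outside the debugger as well, a logger can be wired up explicitly. A minimal sketch in C#, assuming a recent Testcontainers version where the builder exposes `WithLogger(ILogger)`; the `PostgreSqlBuilder` module and image tag are just examples:

```csharp
using Microsoft.Extensions.Logging;
using Testcontainers.PostgreSql;

// Route Testcontainers lifecycle logs ("Connected to Docker",
// "Docker container ... created", readiness checks, ...) to the
// console so they also appear under `dotnet test`.
using var loggerFactory = LoggerFactory.Create(b => b.AddConsole());
var logger = loggerFactory.CreateLogger("Testcontainers");

var container = new PostgreSqlBuilder()
    .WithImage("postgres:16-alpine") // example image
    .WithLogger(logger)
    .Build();

await container.StartAsync();
```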

Hi Andre, same issue, I'm afraid. Parallelism 1 succeeds, 4 fails (as does running with dotnet test).

The only logs I could find:


info: MinIOFixture[0]
      Connected to Docker:
        Host: npipe://./pipe/podman-machine-default
        Server Version: 5.4.2
        Kernel Version: 5.15.167.4-microsoft-standard-WSL2
        API Version: 1.41
        Operating System: fedora
        Total Memory: 15.47 GB
info: PostgresFixture[0]
      Connected to Docker:
        Host: npipe://./pipe/podman-machine-default
        Server Version: 5.4.2
        Kernel Version: 5.15.167.4-microsoft-standard-WSL2
        API Version: 1.41
        Operating System: fedora
        Total Memory: 15.47 GB
info: RedisFixture[0]
      Connected to Docker:
        Host: npipe://./pipe/podman-machine-default
        Server Version: 5.4.2
        Kernel Version: 5.15.167.4-microsoft-standard-WSL2
        API Version: 1.41
        Operating System: fedora
        Total Memory: 15.47 GB
info: MinIOFixture[0]
      Docker container e73ca213adc5 created
info: MinIOFixture[0]
      Start Docker container e73ca213adc5
info: MinIOFixture[0]
      Wait for Docker container e73ca213adc5 to complete readiness checks
info: MinIOFixture[0]
      Docker container e73ca213adc5 ready
info: MinIOFixture[0]
      Docker container a62085b538a5 created
info: RedisFixture[0]
      Docker container f6b607951eee created
info: PostgresFixture[0]
      Docker container d6a250d2c72a created
info: RedisFixture[0]
      Start Docker container f6b607951eee
info: MinIOFixture[0]
      Start Docker container a62085b538a5
info: PostgresFixture[0]
      Start Docker container d6a250d2c72a
info: RedisFixture[0]
      Wait for Docker container f6b607951eee to complete readiness checks
info: MinIOFixture[0]
      Wait for Docker container a62085b538a5 to complete readiness checks
info: PostgresFixture[0]
      Wait for Docker container d6a250d2c72a to complete readiness checks
info: PostgresFixture[0]
      Execute "pg_isready --host localhost --dbname testdb --username testuser" at Docker container d6a250d2c72a
info: PostgresFixture[0]
      Docker container d6a250d2c72a ready

vlflorian avatar Apr 30 '25 11:04 vlflorian

The output looks pretty good. I'm just wondering why you're seeing the line Connected to Docker multiple times. Are you using different logger instances? That line should only be logged once per connection and logger instance (maybe you can test how it behaves when reusing the same logger instance).

I have no idea why you're running into this. It kind of looks like there are too many simultaneous requests, but I can't imagine that's the issue. You're using xUnit.net, right?

HofmeisterAn avatar Apr 30 '25 16:04 HofmeisterAn
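The "same logger instance" suggestion above could look like this: a single shared `ILogger` that every fixture passes to its builder, so the runtime info is logged only once. A sketch; the `SharedTestLogger` name is purely illustrative:

```csharp
using Microsoft.Extensions.Logging;

// One logger instance shared by MinIOFixture, PostgresFixture, and
// RedisFixture; the "Connected to Docker" banner should then be
// emitted only once per connection instead of once per fixture.
public static class SharedTestLogger
{
    public static readonly ILogger Instance = LoggerFactory
        .Create(b => b.AddConsole())
        .CreateLogger("Testcontainers");
}
```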

We are using 3 Testcontainers instances at once 🙈 and yes, we are using xUnit! Something akin to too many simultaneous requests sounds like it could be the problem, since the errors mostly disappear when the tests are run with parallelism disabled.

vlflorian avatar May 02 '25 07:05 vlflorian
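Since disabling parallelism makes the errors disappear, capping xUnit's parallelism in the test project is an easy way to make `dotnet test` behave like the Rider setting. A sketch of an `xunit.runner.json` (standard xUnit configuration keys; the file must be placed next to the test project and copied to the output directory):

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "maxParallelThreads": 1
}
```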

We are using 3 Testcontainers instances at once

This shouldn't be a problem at all (unless you're hitting some resource limits). I've never run into an issue like that with Docker. Maybe try reaching out to the Podman team for some guidance. I'm still curious, though, why you're seeing the container runtime info more than once.

HofmeisterAn avatar May 02 '25 07:05 HofmeisterAn

I get the same timeout errors when running test containers simultaneously. I have 7 test files, which request 7 test containers due to parallelism.

I am using Rancher Desktop and the CosmosDB image (which takes ~2 minutes to pass its readiness check, which is really frustrating).

 Message: 
System.TimeoutException : The operation has timed out.

  Stack Trace: 
NamedPipeClientStream.ConnectInternal(Int32 timeout, CancellationToken cancellationToken, Int32 startTime)
ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
<<-ctor>b__0>d.MoveNext()
--- End of stack trace from previous location ---
ManagedHandler.ProcessRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
ManagedHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
DockerClient.PrivateMakeRequestAsync(TimeSpan timeout, HttpCompletionOption completionOption, HttpMethod method, String path, IQueryString queryString, IDictionary`2 headers, IRequestContent data, CancellationToken cancellationToken)
DockerClient.MakeRequestAsync[T](IEnumerable`1 errorHandlers, HttpMethod method, String path, IQueryString queryString, IRequestContent body, IDictionary`2 headers, TimeSpan timeout, CancellationToken token)
ContainerOperations.InspectContainerAsync(String id, CancellationToken cancellationToken)
DockerContainerOperations.ByIdAsync(String id, CancellationToken ct)
DockerContainer.CheckReadinessAsync(WaitStrategy waitStrategy, CancellationToken ct)
<<WaitUntilAsync>g__UntilAsync|0>d.MoveNext()
--- End of stack trace from previous location ---
WaitStrategy.WaitUntilAsync(Func`1 wait, TimeSpan interval, TimeSpan timeout, Int32 retries, CancellationToken ct)
DockerContainer.CheckReadinessAsync(IEnumerable`1 waitStrategies, CancellationToken ct)
DockerContainer.UnsafeStartAsync(CancellationToken ct)
DockerContainer.StartAsync(CancellationToken ct)
IntegrationTestBase.SetCosmosDbTestContainerAsync() line 49
IntegrationTestBase.InitializeAsync() line 17
CreateRecordTests.InitializeAsync() line 18

Docker version

Client:
 Version:           27.5.1-rd
 API version:       1.45 (downgraded from 1.47)
 Go version:        go1.22.11
 Git commit:        0c97515
 Built:             Thu Jan 23 18:14:31 2025
 OS/Arch:           windows/amd64
 Context:           default

Server:
 Engine:
  Version:          26.1.5
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.22.5
  Git commit:       411e817ddf710ff8e08fa193da80cb78af708191
  Built:            Fri Jul 26 17:51:06 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.17
  GitCommit:        3a4de459a68952ffb703bbe7f2290861a75b6b67
 runc:
  Version:          1.1.14
  GitCommit:        2c9f5602f0ba3d9da1c2596322dfc4e156844890
 docker-init:
  Version:          0.19.0
  GitCommit:

Docker Info:

Client:
 Version:    27.5.1-rd
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.20.1
    Path:     C:\Program Files\Rancher Desktop\resources\resources\win32\docker-cli-plugins\docker-buildx.exe
  compose: Docker Compose (Docker Inc.)
    Version:  v2.33.0
    Path:     C:\Program Files\Rancher Desktop\resources\resources\win32\docker-cli-plugins\docker-compose.exe

Server:
 Containers: 29
  Running: 26
  Paused: 0
  Stopped: 3
 Images: 45
 Server Version: 26.1.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3a4de459a68952ffb703bbe7f2290861a75b6b67
 runc version: 2c9f5602f0ba3d9da1c2596322dfc4e156844890
 init version:
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 5.15.133.1-microsoft-standard-WSL2
 Operating System: Rancher Desktop WSL Distribution
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 15.46GiB
 Name: M-PC
 ID: fab94368-c28d-45c1-aa24-936e6fe98fa7
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

I got the same error in Azure DevOps Pipelines (with the ubuntu-22.04 image) as well.

mm-srtr avatar May 26 '25 15:05 mm-srtr
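An alternative to limiting threads globally is grouping the container-heavy test classes into one xUnit collection, so they share a single container and never start containers concurrently. A sketch using standard xUnit attributes; `CosmosDbFixture` and the class names are illustrative:

```csharp
using Xunit;

// All classes in the "cosmos" collection share one CosmosDbFixture
// instance and run sequentially relative to each other, so only one
// CosmosDB container is started for the whole collection.
[CollectionDefinition("cosmos")]
public class CosmosCollection : ICollectionFixture<CosmosDbFixture> { }

[Collection("cosmos")]
public class CreateRecordTests
{
    // tests here reuse the already-started container
}
```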

I tested this and it looks like a Docker.DotNet issue. Docker.DotNet creates new connections instead of reusing them, which we should fix later. For now, we can increase the default timeout and make it configurable (using the same approach we use for the custom configurations).

HofmeisterAn avatar Jun 24 '25 15:06 HofmeisterAn

I ran some local tests and encountered the same issue while using Podman Desktop on Windows. I created a PR that lets developers configure the named pipe connection timeout and also increases the default timeout. After making this change, I didn't run into any issues anymore.

As mentioned in the PR, we should also take a closer look at Docker.DotNet. I believe there are more improvements we can make there. I will close this issue with the PR and create a new one in the Docker.DotNet fork.

HofmeisterAn avatar Jul 08 '25 18:07 HofmeisterAn

Thank you, @HofmeisterAn !

vlflorian avatar Jul 09 '25 07:07 vlflorian

@HofmeisterAn, thanks for the investigation - just ran into this issue, great to see there is already some resolution!

Just wondering when we might see the fix in a package release?

jomonty avatar Aug 18 '25 08:08 jomonty

@jomonty As you can see here, we are open to getting help on the upstream change: https://github.com/testcontainers/Docker.DotNet/issues/30

Would you be interested to take a look?

kiview avatar Aug 18 '25 11:08 kiview

@jomonty As you can see here, we are open to getting help on the upstream change: testcontainers/Docker.DotNet#30

Would you be interested to take a look?

Hi @kiview, I had seen the upstream issue. I'm afraid that between work and personal commitments I can't take it on in the near term, but I will certainly take a look when I get a chance.

Apologies - I was referring to #1480; not strictly a fix for this issue, but it does provide a workaround sufficient for my use case right now (running tests locally after swapping from Docker Desktop to Podman Desktop). It was merged Jul 9th, but the last release of testcontainers-dotnet was 4.6 on Jun 13th. Do you know when the next release is likely to go out?

jomonty avatar Aug 18 '25 12:08 jomonty

Ah sorry, that makes sense @jomonty. @HofmeisterAn generally decides when to cut a release, so it's really up to him.

kiview avatar Aug 18 '25 12:08 kiview

@jomonty I'd like to finish the Kafka PR. I'm waiting for feedback. After that I plan to publish a new version.

HofmeisterAn avatar Aug 18 '25 13:08 HofmeisterAn

@jomonty I'd like to finish the Kafka PR. I'm waiting for feedback. After that I plan to publish a new version.

Awesome, thank you!

jomonty avatar Aug 18 '25 15:08 jomonty