[Feature]: Split automatic trace files for parametrized tests
🚀 Feature Request
I would like the ability to automatically save a separate trace file for each invocation of a parametrized test.
Example
I have the option .setTrace(Options.Trace.RETAIN_ON_FAILURE) and a parametrized test that fails for several parameter sets. A separate trace file should be created for each failing invocation.
Motivation
A parametrized test can fail for multiple parameter sets. Currently I can only collect the trace for the last failed case, not for all of them.
Does startChunk do what you need? If not, can you explain what "I'm not able to collect all failed cases but the last one." means and what improvements to the Playwright API could help?
For example, I have the following parametrized test method. It runs in a CI/CD pipeline with multiple ConfigRecord configRecord parameters. When more than one invocation fails, it produces only one trace file, CustomMetadataTest.testSetCustomMetadata-chromium/trace.zip, containing only the last failed method execution. In such a situation I would like to have separate trace files per failed execution, like they are presented in the Run window in IntelliJ as shown on the screenshot.
The trace files could be automatically named with an index, e.g. CustomMetadataTest.testSetCustomMetadata-chromium/28_trace.zip or CustomMetadataTest.testSetCustomMetadata-chromium/trace_28.zip, or with a custom name generator provided via a method annotation.
public class CustomMetadataTest extends TestFixture {
    // ...
    @ParameterizedTest()
    @MethodSource("getParameters")
    @DisplayName("Update Custom Metadata records")
    void testSetCustomMetadata(String apiName, ConfigRecord configRecord) {
        TestUtils.assumeConfigRecord(configRecord);
        Optional.ofNullable(
            SOQLUtils.queryCustomMetadata(salesforcePage.getRequestContext(), apiName, configRecord.getLabel())
        ).ifPresentOrElse(jsonObject -> {
            String recordId = SOQLUtils.getIdFromAttributes(jsonObject);
            salesforcePage.navigateEditRecordId(recordId);
            salesforcePage.updateRecordFields(configRecord);
            salesforcePage.save();
        }, () -> System.out.println("Custom Metadata " + apiName + " not found: " + configRecord.getLabel()));
    }
}
Sounds like startChunk/stopChunk provides the API you need: you could have multiple traces written, one per test run, if the tests share the page/context. Is there a reason that API is not sufficient?
When I want to save successful executions, then yes, it generates a file with multiple traces. For the case described in this ticket, the trace file is saved only when a test fails. From my experience with the startChunk/stopChunk API, the trace file is not even generated, probably because the test fails in the middle of execution and stopChunk is never called before the trace file is saved.
@ParameterizedTest()
@MethodSource("getParameters")
@DisplayName("Update Custom Metadata records")
void testSetCustomMetadata(String apiName, ConfigRecord configRecord) {
    TestUtils.assumeConfigRecord(configRecord);
    Optional.ofNullable(
        SOQLUtils.queryCustomMetadata(salesforcePage.getRequestContext(), apiName, configRecord.getLabel())
    ).ifPresentOrElse(jsonObject -> {
        String recordId = SOQLUtils.getIdFromAttributes(jsonObject);
        salesforcePage.getPage().context().tracing().startChunk(
            new Tracing.StartChunkOptions().setName("Update Custom Metadata: " + apiName + " - " + configRecord.getLabel())
        );
        salesforcePage.navigateEditRecordId(recordId);
        salesforcePage.updateRecordFields(configRecord); // <<- test fails here
        salesforcePage.save();
        salesforcePage.getPage().context().tracing().stopChunk();
    }, () -> System.out.println("Custom Metadata " + apiName + " not found: " + configRecord.getLabel()));
}
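One pattern that guarantees stopChunk runs even when an assertion in the middle fails is wrapping the chunk in try/finally. The sketch below uses a stand-in FakeTracing class (not the Playwright API) purely to demonstrate that the finally block fires when the body throws:

```java
// Stand-in for the tracing object, only to demonstrate the try/finally
// pattern; in real code this would be page.context().tracing().
class FakeTracing {
    boolean chunkOpen = false;
    void startChunk(String name) { chunkOpen = true; }
    void stopChunk() { chunkOpen = false; }
}

class ChunkDemo {
    // Runs the test body between startChunk/stopChunk; the finally block
    // guarantees stopChunk is called even if the body throws.
    static void runInChunk(FakeTracing tracing, String name, Runnable body) {
        tracing.startChunk(name);
        try {
            body.run();
        } finally {
            tracing.stopChunk();
        }
    }
}
```

In a real test, body.run() would be the Playwright actions, and the tracing object would be page.context().tracing().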
From my experience (with startChunk/stopChunk API usage) the trace file is not even generated, probably because the test fails in the middle of execution and stopChunk was not called before the trace file is saved.
That would be a problem with the test framework you are using; stopChunk should work regardless of the state of the page. Could you share a minimal example where it is failing?
The test framework is JUnit 5; here's a minimal working example. It generates a single trace file with only one failed test case, regardless of whether group/groupEnd or startChunk/stopChunk is used in the test method.
Moving groupEnd/stopChunk to @AfterAll results in no trace file.
package com.example;

import com.microsoft.playwright.BrowserType;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.Tracing;
import com.microsoft.playwright.junit.Options;
import com.microsoft.playwright.junit.OptionsFactory;
import com.microsoft.playwright.junit.UsePlaywright;
import com.microsoft.playwright.options.AriaRole;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

import java.nio.file.Path;
import java.util.stream.Stream;

@UsePlaywright(ExampleTest.ExampleTestOptions.class)
public class ExampleTest {
    public static class ExampleTestOptions implements OptionsFactory {
        @Override
        public Options getOptions() {
            return new Options()
                .setBrowserName("chromium")
                .setLaunchOptions(new BrowserType.LaunchOptions().setHeadless(false))
                .setOutputDir(Path.of("test-traces"))
                .setTrace(Options.Trace.RETAIN_ON_FAILURE);
        }
    }

    static Stream<Arguments> getParameters() {
        return Stream.of(
            Arguments.of("https://www.wikipedia.org/", "invalid1", "search value 1"),
            Arguments.of("https://www.wikipedia.org/", "searchInput", "JUnit"),
            Arguments.of("https://www.wikipedia.org/", "invalid2", "search value 2")
        );
    }

    Page testPage;

    @BeforeEach
    void setup(Page page) {
        testPage = page;
    }

    @ParameterizedTest
    @MethodSource("getParameters")
    void exampleTest(String url, String locator, String value) {
        testPage.context().tracing().startChunk(new Tracing.StartChunkOptions().setName(locator));
        testPage.context().tracing().group(locator);
        testPage.navigate(url);
        testPage.locator(String.format("input[id='%s']", locator)).fill(value); // <-- failure expected here
        testPage.getByRole(AriaRole.BUTTON, new Page.GetByRoleOptions().setName("Search")).click();
        testPage.context().tracing().groupEnd();
        testPage.context().tracing().stopChunk();
    }
}
If you use Playwright fixtures, you don't need to call start/stopChunk() manually; it should just work. The problem is that the name of the output dir currently includes <class name> + <method name> + <browser type>, so all parameterized runs of the test end up writing to the same output dir and the last test always wins. We'd need a different name for different parameters.
@uchagani do you want to have a look at this?
@yury-s sure I can take a look. Do you happen to know how trace file names are handled for parameterized tests in node? We can implement the same nomenclature.
There is no such thing as a "parameterized test" there. Instead you generate separate tests with unique names. Those names are used as the output dir names (after some sanitization).
@yury-s I decided to work on this issue for now, since the other one I was working on has a workaround and this one is more of a blocker for people.
For parameterized tests, we need to come up with a naming convention. JUnit doesn't have a good way to get at the parameter type or value. What it does have is a getDisplayName method that we can use, but that comes with challenges: we would have to sanitize it to ensure there are no characters that cause problems in directory names.
The simplest thing I can think of is to store and increment an integer for each invocation of the test, so for parameterized tests the trace files would be stored in:
test-results\SomeClass.someMethod-chromium-1\trace.zip
test-results\SomeClass.someMethod-chromium-2\trace.zip
test-results\SomeClass.someMethod-chromium-3\trace.zip
I'm open to other suggestions.
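A minimal sketch of that counter approach (a hypothetical helper, not the actual Playwright implementation): keep a per-test invocation counter and append it to the output directory name.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: derive a unique output dir for each invocation of a
// parameterized test by appending an incrementing counter per test id.
class TraceDirNamer {
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    // testId would be something like "SomeClass.someMethod-chromium".
    String nextOutputDir(String testId) {
        int n = counters.computeIfAbsent(testId, k -> new AtomicInteger()).incrementAndGet();
        return testId + "-" + n;
    }
}
```

The first call for a given test id yields "...-1", the second "...-2", matching the directory names above.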
comes with challenges because we will have to sanitize it to ensure there are no characters that will cause problems as dir names.
We do sanitize strings for file paths upstream and also trim long paths by replacing the middle part of a string with its hash, so that it doesn't overflow Windows filesystem limits. We could do something similar here as well. My preference would be to have the parameter values included in the paths, as it would improve the readability of the output: one could easily understand which test variant produced the artifact. If there is no reliable way to get a nice human-readable representation of the parameters, we could try to serialize them to JSON using Gson. There is a chance that the parameters are complex/recursive objects, but in the common case I hope they are just primitive values. If the string representation grows too big, we can always abbreviate it using the same hash approach. The counter suffix for repeated invocations of the same test that you suggest I'd use as the ultimate fallback: it is probably the least friendly to the user, but the most reliable of all, as it doesn't depend on the actual parameters. What do you think?
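A rough sketch of that sanitize-and-trim idea (the allowed character set, the hash length, and the maximum length here are illustrative assumptions, not the values used upstream):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch: make a display name safe for use as a directory name, and
// abbreviate overly long names by hashing. All limits are arbitrary here.
class NameSanitizer {
    static String sanitize(String displayName, int maxLen) {
        // Replace any run of characters outside a conservative allowed set.
        String safe = displayName.replaceAll("[^A-Za-z0-9._-]+", "-");
        if (safe.length() <= maxLen) {
            return safe;
        }
        // Too long: keep both ends and replace the middle with a short hash,
        // similar to the upstream approach for Windows path-length limits.
        String hash = shortHash(safe);
        int keep = (maxLen - hash.length() - 2) / 2; // room for two separators
        return safe.substring(0, keep) + "-" + hash + "-" + safe.substring(safe.length() - keep);
    }

    private static String shortHash(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 4; i++) {
                sb.append(String.format("%02x", digest[i]));
            }
            return sb.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```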
I'd suggest storing all traces of a test method in a single directory, with the trace filename combined with the JUnit invocation index.
test-results\SomeClass.someMethod-chromium\{index}-trace.zip
test-results\SomeClass.someMethod-chromium\01-trace.zip
test-results\SomeClass.someMethod-chromium\02-trace.zip
test-results\SomeClass.someMethod-chromium\03-trace.zip
A recent JUnit update introduced named argument sets, which let you provide a custom name for a list of parameters and display it in the test execution. In that case the filename and directory path could look like this for the example below:
test-results\com.ExampleTest.exampleTest-chromium/{index}-{ArgumentName}-trace.zip
test-results\com.ExampleTest.exampleTest-chromium/01-Arguments-1-trace.zip
test-results\com.ExampleTest.exampleTest-chromium/02-Arguments-2-trace.zip
test-results\com.ExampleTest.exampleTest-chromium/03-Arguments-3-trace.zip
package com.example;

import com.microsoft.playwright.BrowserType;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.junit.Options;
import com.microsoft.playwright.junit.OptionsFactory;
import com.microsoft.playwright.junit.UsePlaywright;
import com.microsoft.playwright.options.AriaRole;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

import java.nio.file.Path;
import java.util.stream.Stream;

@UsePlaywright(ExampleTest.ExampleTestOptions.class)
public class ExampleTest {
    public static class ExampleTestOptions implements OptionsFactory {
        @Override
        public Options getOptions() {
            return new Options()
                .setBrowserName("chromium")
                .setLaunchOptions(new BrowserType.LaunchOptions().setHeadless(false))
                .setOutputDir(Path.of("test-traces"))
                .setTrace(Options.Trace.RETAIN_ON_FAILURE);
        }
    }

    static Stream<Arguments> getParameters() {
        return Stream.of(
            Arguments.argumentSet("Arguments 1", "https://www.wikipedia.org/", "invalid1", "search value 1"),
            Arguments.argumentSet("Arguments 2", "https://www.wikipedia.org/", "searchInput", "JUnit"),
            Arguments.argumentSet("Arguments 3", "https://www.wikipedia.org/", "invalid2", "search value 2")
        );
    }

    Page testPage;

    @BeforeEach
    void setup(Page page) {
        testPage = page;
    }

    @ParameterizedTest
    @MethodSource("getParameters")
    void exampleTest(String url, String locator, String value) {
        testPage.navigate(url);
        testPage.locator(String.format("input[id='%s']", locator)).fill(value); // <-- failure expected here
        testPage.getByRole(AriaRole.BUTTON, new Page.GetByRoleOptions().setName("Search")).click();
    }
}
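For illustration, the {index}-{ArgumentName} filename suggested above could be assembled like this (a sketch; the zero-padding width and the sanitization rule are arbitrary choices, not an agreed convention):

```java
// Sketch: build a trace filename from the invocation index and the
// (sanitized) argument set name, e.g. "01-Arguments-1-trace.zip".
class TraceFileNamer {
    static String traceFileName(int invocationIndex, String argumentSetName) {
        // Replace characters that could be problematic in file names.
        String safe = argumentSetName.replaceAll("[^A-Za-z0-9._-]+", "-");
        return String.format("%02d-%s-trace.zip", invocationIndex, safe);
    }
}
```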
Hi @yury-s @uchagani, any updates?
I can implement this very quickly but right now I'm running into a threading issue with the server that is used in the tests.
@yury-s any chance you would have some time to help look into that? I mentioned it in another issue (will try to link that when I get home)
Sure, what's the issue and what are you puzzled about?
@yury-s https://github.com/microsoft/playwright-java/pull/1788#issuecomment-2938333185
I'll try re-running with DEBUG=pw:protocol this evening and will provide the logs.