Ruslan Yushchenko
Hi Archana, after some research we found that cp930 uses a 1-byte representation for Katakana, while all other characters are represented by 2 bytes. Currently, Cobrix assumes...
Sorry, we don't have the write feature at the moment. It is in the plans, but probably won't be available soon. Regarding the Japanese charset: we can implement support for it,...
Does Cobrix support the justified clause, the rename clause, and `P` in the PIC clause, which scales the value up or down?
Yes, I think we added support for `P` some time ago. What are the justified and rename clauses? A couple of examples may help us understand whether we support such PICs or...
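For background on what `P` does, here is a rough sketch (hypothetical helper, not Cobrix's implementation) of how the `P` symbol in a COBOL PIC clause scales a stored number:

```scala
// Each 'P' in a PIC clause is an assumed (not stored) digit position:
// trailing P's (e.g. 999PP) scale the value up by 10 per P; leading P's
// (e.g. PP999, with an implied decimal point before them) scale it down.
def scaledValue(storedDigits: Long, pic: String): BigDecimal = {
  val u = pic.toUpperCase.replace("V", "") // 'V' only marks the decimal point
  val p = u.count(_ == 'P')                // assumed digit positions
  val d = u.count(_ == '9')                // stored digit positions
  if (u.endsWith("P"))
    BigDecimal(storedDigits) * BigDecimal(10).pow(p)     // scale up
  else
    BigDecimal(storedDigits) / BigDecimal(10).pow(p + d) // scale down
}

val up   = scaledValue(123, "999PP") // 123 stored as 999PP means 12300
val down = scaledValue(123, "PP999") // 123 stored as PP999 means 0.00123
```

So the same three stored digits can represent very different magnitudes depending on where the `P`s sit.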
Does Cobrix support the justified clause, the rename clause, and `P` in the PIC clause, which scales the value up or down?
We don't support these clauses at the moment. We might add them to our plans.
By default, Cobrix retains the root GROUP by putting all columns under the corresponding struct field. You can use a different schema retention policy to get your columns on the...
The `option("schema_retention_policy", "collapse_root")` should make a difference. Try comparing the outputs of `df.printSchema()`.
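For reference, a minimal read sketch with that option (paths are placeholders; assumes the `spark-cobol` data source is on the classpath and `spark` is an active `SparkSession`):

```scala
// Read a mainframe file, collapsing the root GROUP so its children
// become top-level columns instead of fields of a single struct.
val df = spark.read
  .format("cobol")
  .option("copybook", "data/copybook.cob")              // placeholder path
  .option("schema_retention_policy", "collapse_root")
  .load("data/records")                                 // placeholder path

df.printSchema()
```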
Regarding parsing of individual fields, this example may be helpful to you:
```scala
import za.co.absa.cobrix.cobol.parser.CopybookParser
import za.co.absa.cobrix.cobol.parser.ast.{Group, Primitive}

val copybookContents =
  """ 01 RECORD.
        05 A1 PIC X(5).
        05 A2 PIC...
```
Regarding schema flattening: if you have a nested DataFrame, you can convert it to a flat one using `SparkUtils.flattenSchema(df)`. Examples are in http://github.com/AbsaOSS/cobrix/blob/ec600f549e00ec3cfd4025353bfeec78acf7b532/spark-cobol/src/test/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtilsSuite.scala#L81-L81
Btw, the above code can be simplified. I wrote it like that to emphasize that the decoder is a function returned from `decode()`. So
```scala
val decoderForA1 = a1.decode
val decoderForA2...
```
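The idea that `decode` hands back a reusable function can be illustrated in plain Scala (hypothetical `Field` type with made-up names, not the actual Cobrix API):

```scala
import java.nio.charset.StandardCharsets

// Stand-in for a parsed copybook field: decode returns a function from
// raw bytes to a decoded value, so it can be obtained once and reused.
final case class Field(name: String, length: Int) {
  def decode: Array[Byte] => String =
    bytes => new String(bytes.take(length), StandardCharsets.US_ASCII).trim
}

val a1 = Field("A1", 5)
val decoderForA1 = a1.decode // a plain function value, reusable per record
val value = decoderForA1("HELLO WORLD".getBytes(StandardCharsets.US_ASCII))
// value == "HELLO"
```

Storing the returned function in a `val` keeps any one-time setup out of the per-record path.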
You are welcome!