frictionless-r
R package to read and write Frictionless Data Packages
Parquet files are now supported by Frictionless Framework:

```
$ frictionless describe data/*.parquet
name: table
type: table
path: data/table.parquet
scheme: file
format: parquet
mediatype: application/parquet
schema:
  fields:
    - name: id...
```
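Until native Parquet support lands on the R side, one possible workaround (a sketch, assuming the `arrow` package is installed and a local `data/table.parquet` exists) is to read the file into a plain data frame and add it as a regular resource:

```R
library(frictionless)
library(arrow)  # assumption: arrow is installed

# Read the Parquet file into a plain data frame
df <- arrow::read_parquet("data/table.parquet")

# Add it to a new package; frictionless-r serializes it as CSV on write
package <- create_package()
package <- add_resource(package, resource_name = "table", data = df)
```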
`example_package` currently has a remote `$directory`:

```R
> library(frictionless)
> example_package$directory
[1] "https://raw.githubusercontent.com/frictionlessdata/frictionless-r/main/inst/extdata"
```

That means an internet connection is needed to read resources. This was by design for some...
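An offline-friendly sketch, assuming the example files ship in the package's `inst/extdata`, reads the package from the locally installed copy instead:

```R
library(frictionless)

# Locate the installed example files so no internet connection is needed
path <- system.file("extdata", "datapackage.json", package = "frictionless")
local_package <- read_package(path)
local_package$directory  # a local directory, not a GitHub URL
```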
frictionless-py now allows reading directly from Zenodo:

```
Package("https://zenodo.org/record/7078768")  # Package.from_zenodo() under the hood
```

It would be nice if frictionless-r could do the same. Note, it can already read...
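A minimal sketch of what such support could look like in R; the `read_package_zenodo()` helper and the Zenodo API response shape are assumptions, not existing frictionless-r API:

```R
library(frictionless)
library(httr)      # assumption: used to query the Zenodo records API
library(jsonlite)

# Hypothetical helper: resolve a Zenodo record URL to its datapackage.json
read_package_zenodo <- function(record_url) {
  record_id <- basename(record_url)
  api_url <- paste0("https://zenodo.org/api/records/", record_id)
  record <- jsonlite::fromJSON(httr::content(httr::GET(api_url), as = "text"))
  # Find the descriptor among the record's files and read it as usual
  descriptor_url <- record$files$links$self[record$files$key == "datapackage.json"]
  read_package(descriptor_url)
}
```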
Changelog: [resource.$schema (new)](https://datapackage.org/overview/changelog/#resourceschema-new)

- [ ] Check `$schema`
- [ ] If not provided, look for `profile` (see also #221)
- [ ] If no `profile`, assume v1
- [...
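The fallback order above can be sketched as follows (the return labels and the use of rlang's `%||%` null-default operator are assumptions):

```R
library(rlang)  # for the null-default operator %||%

# `descriptor` is a parsed datapackage.json (a named list)
detect_version <- function(descriptor) {
  descriptor$`$schema` %||%   # 1. use $schema when provided
    descriptor$profile %||%   # 2. otherwise fall back to profile (#221)
    "v1"                      # 3. otherwise assume v1
}
```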
A created schema will only have the field properties `name`, `type` and (sometimes) `constraints`. I see it as fairly common to add more properties, such as `description`, `required` etc. It...
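Since a Table Schema is just a named list in R, such properties can already be added by hand; a sketch with made-up values:

```R
library(frictionless)

df <- data.frame(id = 1:3, name = c("a", "b", "c"))
schema <- create_schema(df)

# Enrich the first field with common extra properties
schema$fields[[1]]$description <- "Unique record identifier"
schema$fields[[1]]$constraints <- list(required = TRUE)
```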
Add a function to add common properties to the `descriptor` file, e.g.:

```R
package
```
`read_resource()` supports reading from [inline data](https://specs.frictionlessdata.io/data-resource/#data-inline-data), but that feature is currently marked as `experimental` because it completely ignores `schema`. Would be good to support schema, but likely involves a hefty...
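One possible direction, sketched here with an illustrative helper that is not part of frictionless-r, is to coerce the inline data to the declared field types after reading:

```R
# Illustrative helper: apply a Table Schema's field types to a data frame
coerce_to_schema <- function(df, schema) {
  for (field in schema$fields) {
    col <- field$name
    df[[col]] <- switch(
      field$type,
      integer = as.integer(df[[col]]),
      number  = as.numeric(df[[col]]),
      date    = as.Date(df[[col]]),
      df[[col]]  # leave other types untouched
    )
  }
  df
}
```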
Not all resources have a table schema. `read_resource()` should allow this, but warn users.
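A sketch of what that could look like inside `read_resource()` (the names are illustrative):

```R
# Warn instead of erroring when a resource lacks a table schema,
# so column types fall back to readr's guessing
if (is.null(resource$schema)) {
  warning("Resource has no table schema; column types will be guessed.",
          call. = FALSE)
}
```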
See https://github.com/tidyverse/readr/issues/1335; could be done by mutating such columns with `format_ISO8601(period)` before calling `read_csv()`.

- [ ] Also update `create_schema()` to use `type` = `duration` for periods.
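A sketch of the suggested mutation, assuming the period columns are `lubridate` Period objects:

```R
library(lubridate)

df <- data.frame(id = 1:2)
df$elapsed <- hours(c(1, 0)) + minutes(c(30, 45))

# Convert Period columns to ISO 8601 duration strings before writing,
# so they round-trip as Table Schema `duration` values
df$elapsed <- format_ISO8601(df$elapsed)
```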
The documentation of `?write_package()` states in the description section: "Writes a Data Package and its related Data Resources to disk as a datapackage.json and CSV files. Already existing CSV files...