```
$ duckdb f.db -c "COPY table1 TO 'table1.csv'; COPY table1 TO 'table1.parquet';"
```

On my machine, where I ran the basic benchmark, the tool from the link is much faster.

``` $ time ./duckdb_cli-linux-amd64 ./basic_batched.db -c "COPY user TO 'user.csv'" 100% (00:00:20.55 elapsed)

real 0m24.162s user 0m22.505s sys 0m1.988s ```

``` $ time ./duckdb_cli-linux-amd64 ./basic_batched.db -c "COPY user TO 'user.parquet'" 100% (00:00:17.11 elapsed)

real 0m20.970s user 0m19.347s sys 0m1.841s ```

```
$ time cargo run --bin parquet --release -- basic_batched.db user -o out.parquet
    Finished `release` profile [optimized] target(s) in 0.11s
     Running `target/release/parquet basic_batched.db user -o out.parquet`
Database opened in 14.828µs

SQLite to Parquet Exporter
==========================
Database: basic_batched.db
Page size: 4096 bytes
Text encoding: Utf8
Output: out.parquet
Batch size: 10000

Exporting table: user
Output file: out.parquet

  user: 100000000 rows (310.01 MB) - 5.85s (17095636 rows/sec)

Export completed successfully!
==========================
Table: user
Rows exported: 100000000
Time taken: 5.85s
Output file: out.parquet
Throughput: 17095564 rows/sec
File size: 310.01 MB

real    0m6.052s
user    0m10.455s
sys     0m0.537s
```
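For context on what that run is doing: the output above describes a batched export (10000 rows read from SQLite at a time, then written out to Parquet). This is not the linked tool's actual code, just a rough sketch of that general approach using the rusqlite, arrow, and parquet crates, assuming a hypothetical two-column `user` table (`id INTEGER NOT NULL`, `name TEXT`):

```rust
// Illustrative sketch only, not the linked exporter's implementation.
// Assumes a `user` table with (id INTEGER NOT NULL, name TEXT).
use std::error::Error;
use std::fs::File;
use std::sync::Arc;

use arrow::array::{ArrayRef, Int64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;
use rusqlite::Connection;

const BATCH_SIZE: usize = 10_000;

// Turn the accumulated column buffers into one RecordBatch, write it out,
// and clear the buffers for the next batch.
fn flush(
    writer: &mut ArrowWriter<File>,
    schema: &Arc<Schema>,
    ids: &mut Vec<i64>,
    names: &mut Vec<Option<String>>,
) -> Result<(), Box<dyn Error>> {
    let id_col: ArrayRef = Arc::new(Int64Array::from(std::mem::take(ids)));
    let name_col: ArrayRef =
        Arc::new(std::mem::take(names).into_iter().collect::<StringArray>());
    let batch = RecordBatch::try_new(schema.clone(), vec![id_col, name_col])?;
    writer.write(&batch)?;
    Ok(())
}

fn main() -> Result<(), Box<dyn Error>> {
    let conn = Connection::open("basic_batched.db")?;
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int64, false),
        Field::new("name", DataType::Utf8, true),
    ]));

    let mut writer = ArrowWriter::try_new(File::create("out.parquet")?, schema.clone(), None)?;

    let mut stmt = conn.prepare("SELECT id, name FROM user")?;
    let mut rows = stmt.query([])?;

    let mut ids = Vec::with_capacity(BATCH_SIZE);
    let mut names = Vec::with_capacity(BATCH_SIZE);

    // Stream rows out of SQLite, flushing every BATCH_SIZE rows.
    while let Some(row) = rows.next()? {
        ids.push(row.get::<_, i64>(0)?);
        names.push(row.get::<_, Option<String>>(1)?);
        if ids.len() == BATCH_SIZE {
            flush(&mut writer, &schema, &mut ids, &mut names)?;
        }
    }
    // Final partial batch, if any.
    if !ids.is_empty() {
        flush(&mut writer, &schema, &mut ids, &mut names)?;
    }
    writer.close()?;
    Ok(())
}
```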

```
$ time cargo run --bin csv --release -- basic_batched.db -t user -o out.csv
    Finished `release` profile [optimized] target(s) in 0.03s
     Running `target/release/csv basic_batched.db -t user -o out.csv`

real    0m6.453s
user    0m5.252s
sys     0m1.196s
```
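So on the same 100,000,000-row table, DuckDB took about 24.2s (CSV) and 21.0s (Parquet), while the linked exporter finished in about 6.5s (CSV) and 6.1s (Parquet), roughly a 3-4x speedup on my machine.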