* feat: expose BlockKeyCacheSize and enable WriteThrough when bloom filter disabled
* import/config: add BatchMaxSize and BatchMaxNodes
* config: make BlockKeyCacheSize an OptionalInteger
* config: add and wire datastore.WriteThrough option
* config: omitempty on BlockKeyCacheSize
* changelog: rewrite entry about new options for the datastore
* config: add docs for BatchMaxNodes and BatchMaxSize
* config: make WriteThrough an optional Flag
* changelog: improve description of new datastore/import options
* refactor: DefaultWriteThrough as bool
* chore: boxo v0.26.0
* docs: config and changelog fixes
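As a rough sketch of how these options might be set (the option names come from
the commits above; the config section placement and the values shown are assumed
examples, not defaults taken from this log):

    # hypothetical example values; consult the config docs for real defaults
    ipfs config --json Datastore.BlockKeyCacheSize 65536
    ipfs config --json Datastore.WriteThrough true
    ipfs config --json Import.BatchMaxNodes 128
    ipfs config --json Import.BatchMaxSize 20971520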
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
make things super customizable
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
better json format
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
Migrate to new flatfs
License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>
To test it, set up an S3 bucket (in an AWS region that is not US
Standard, for read-after-write consistency), run `ipfs init`, then
edit `~/.ipfs/config` to say
"Datastore": {
"Type": "s3",
"Region": "us-west-1",
"Bucket": "mahbukkit",
"ACL": "private"
},
with the right values filled in for your bucket and region. Set
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in the environment, and
you should be able to run `ipfs add` and `ipfs cat` and see the bucket
get populated.
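For illustration, the test flow might look like the following (credentials
and file name are placeholders; this assumes the Datastore section above is
already in `~/.ipfs/config`):

    export AWS_ACCESS_KEY_ID=<your-access-key>        # placeholder
    export AWS_SECRET_ACCESS_KEY=<your-secret-key>    # placeholder
    echo "hello s3" > hello.txt
    HASH=$(ipfs add -q hello.txt)   # blocks should land in the bucket
    ipfs cat "$HASH"                # read back through the S3 datastore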
No automated tests exist, unfortunately. S3 is thorny to simulate.
License: MIT
Signed-off-by: Tommi Virtanen <tv@eagain.net>