# Editing datasets

The [Hub](https://huggingface.co/datasets) enables collaborative curation of community and research datasets. We encourage you to explore the datasets available on the Hub and contribute to their improvement to help grow the ML community and accelerate progress for everyone. All contributions are welcome!

Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if you don't have one yet.

## Edit using the Hub UI

> [!WARNING]
> This feature is only available for CSV datasets for now.

The Hub's web interface allows users without any technical expertise to edit a dataset.

Open the dataset page and navigate to the **Data Studio** tab to begin editing.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/data_studio_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/data_studio_button_dark-min.png"/>
</div>

Click on **Toggle edit mode** to enable dataset editing.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/toggle_edit_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/toggle_edit_button_dark-min.png"/>
</div>

Click on a cell's edit button to modify its value.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/edit_cell_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/edit_cell_button_dark-min.png"/>
</div>

Edit as many cells as you want, then click **Commit** to save your changes and write a commit message.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_button_dark-min.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_message-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_message_dark-min.png"/>
</div>

## Using the `huggingface_hub` client library

The `huggingface_hub` library lets you manage Hub repositories, including editing datasets.

For example, here is how to edit a CSV file using the [Hugging Face FileSystem API](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system):

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
path = f"datasets/{repo_id}/data.csv"

with fs.open(path, "r") as f:
    content = f.read()
edited_content = content.replace("foo", "bar")
with fs.open(path, "w") as f:
    f.write(edited_content)
```

You can also apply the edits locally on your disk and commit the changes:

```python
from huggingface_hub import hf_hub_download, upload_file

# Download outside the cache so the file can safely be edited in place
local_path = hf_hub_download(repo_id=repo_id, filename="data.csv", repo_type="dataset", local_dir=".")

with open(local_path, "r") as f:
    content = f.read()
edited_content = content.replace("foo", "bar")
with open(local_path, "w") as f:
    f.write(edited_content)

upload_file(path_or_fileobj=local_path, path_in_repo="data.csv", repo_id=repo_id, repo_type="dataset")
```

> [!TIP]
>
> To download the entire dataset repository locally and edit many files at once, use `snapshot_download` and `upload_folder` instead of `hf_hub_download` and `upload_file`.
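The tip above can be sketched as a minimal outline; `repo_id`, the local directory name, and the commit message are placeholders, not an official recipe:

```python
from huggingface_hub import snapshot_download, upload_folder

def edit_many_files(repo_id: str) -> None:
    """Download a whole dataset repo, edit files locally, then push everything in one commit."""
    # Mirror the full repository into a local directory (outside the cache)
    local_dir = snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir="my-dataset")

    # ... edit any number of files under `local_dir` here ...

    # Upload every changed file in a single commit
    upload_folder(
        folder_path=local_dir,
        repo_id=repo_id,
        repo_type="dataset",
        commit_message="Batch edit",
    )
```

Grouping the edits into one `upload_folder` call also produces a single, easy-to-review commit in the repository history.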

Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more.

## Integrated libraries

If a dataset on the Hub is compatible with a [supported library](./datasets-libraries), loading, editing, and pushing the dataset takes just a few lines.

Here is how to edit a CSV file with Pandas:

```python
import pandas as pd

# Load the dataset
df = pd.read_csv(f"hf://datasets/{repo_id}/data.csv")

# Edit
df = df.apply(...)

# Commit the changes
df.to_csv(f"hf://datasets/{repo_id}/data.csv", index=False)
```

Libraries like Polars and DuckDB also implement the `hf://` protocol to read, edit, and write files on Hugging Face, and libraries like Spark, Dask, or 🤗 Datasets are useful for editing datasets made of many files. See the full list of supported libraries [here](./datasets-libraries).

To see how to load a dataset with a given library, click the "Use this dataset" button on its dataset page.
For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows the code snippet for 🤗 Datasets below.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-dark.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal-dark.png"/>
</div>

## Only upload the new data

Hugging Face's storage is powered by [Xet](https://huggingface.co/docs/hub/en/xet), which uses chunk deduplication to make uploads more efficient.
Unlike traditional cloud storage, Xet doesn't require re-uploading the entire dataset to commit changes.
Instead, it automatically detects which parts of the dataset have changed and instructs the client library to upload only the updated parts.
To do that, Xet uses a smart algorithm to find 64 kB chunks that already exist on Hugging Face.

Let's revisit our previous example with Pandas:

```python
import pandas as pd

# Load the dataset
df = pd.read_csv(f"hf://datasets/{repo_id}/data.csv")

# Edit part of the dataset
df = df.apply(...)

# Commit the changes
df.to_csv(f"hf://datasets/{repo_id}/data.csv", index=False)
```

This code first loads the dataset and then edits it.
Once the edits are done, `to_csv()` materializes the file in memory, chunks it, asks Xet which chunks are already on Hugging Face and which have changed, and then uploads only the new data.

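To build intuition for the deduplication step, here is a toy sketch that hashes fixed-size 64 kB chunks and counts how many are actually new. Real Xet uses content-defined chunk boundaries rather than fixed offsets, so treat this purely as an illustration:

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 kB, the chunk size mentioned above

def chunk_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size chunk of `data`."""
    return [
        hashlib.sha256(data[i : i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

# A 3-chunk "file" where only the middle chunk changes between versions
old = b"a" * (3 * CHUNK_SIZE)
new = old[:CHUNK_SIZE] + b"b" * CHUNK_SIZE + old[2 * CHUNK_SIZE:]

known = set(chunk_hashes(old))  # chunks already stored on the server
to_upload = [h for h in chunk_hashes(new) if h not in known]
print(f"{len(to_upload)} of {len(chunk_hashes(new))} chunks need uploading")  # 1 of 3
```

Only the chunk whose bytes changed is missing from the server's index, so it is the only one that travels over the network.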
## Optimized Parquet editing

The amount of data to upload depends on the edits and on the file structure.

The Parquet format is columnar and compressed at the page level (pages are roughly 1 MB).
We optimized Parquet for Xet with [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc), which ensures that unchanged data generally results in unchanged pages.

Check whether your library supports optimized Parquet on the [supported libraries](./datasets-libraries) page.

## Streaming

For big datasets, we recommend libraries with dataset streaming features that enable end-to-end streaming pipelines.
In this case, the dataset is processed progressively: the old data is streamed in while the new data is uploaded to the Hub.

Check whether your library supports streaming on the [supported libraries](./datasets-libraries) page.