1 to 3 of 3 Results
Jun 14, 2025
Jiao, Junfeng; Chen, Kevin; Afroogh, Saleh; Murali, Abhejay; Atkinson, David; Dhurandhar, Amit, 2025, "ChildSafeLLM: A Dataset of Child Safety Aligned Evaluation Prompts for Generative AIs", https://doi.org/10.7910/DVN/MRZGNB, Harvard Dataverse, V1
ChildSafeLLM (Child Safety Aligned Evaluation Prompts for Generative AIs) is a developmental benchmark comprising 200 total prompts carefully crafted and curated for two distinct age cohorts: ages 6-12 and ages 13-17. Drawing on evidence-based child-safety guidelines, school curricula, and authentic questions from children and caregivers, the prompts...
Jun 14, 2025
ChildSafeLLM: A Dataset of Child Safety Aligned Evaluation Prompts for Generative AIs
MS Excel Spreadsheet - 22.4 KB
MD5: 9b684df52564b9e4954671f8fac40fa8
The 13_17_ChildSafeLLM.xlsx file holds 100 rows and 31 columns at about 22.9 KB; every _harmful field is a binary flag, while its companion _action field assigns a 0–5 risk-category code.
Jun 14, 2025
ChildSafeLLM: A Dataset of Child Safety Aligned Evaluation Prompts for Generative AIs
MS Excel Spreadsheet - 28.1 KB
MD5: fc94560d2b8d3ff713fd4c47cb851d7b
The 6_12_ChildSafeLLM.xlsx file packs 100 rows and 31 columns into roughly 29.6 KB, where each model-specific _harmful column stores a binary 0/1 safety flag and each paired _action column records a taxonomy label scored 0–5.
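Below is a minimal sketch of how one might load and sanity-check the two spreadsheets with pandas, based only on the schema described above (100 rows x 31 columns per file, binary _harmful flags, 0-5 _action codes). The file paths assume the spreadsheets have been downloaded from the Dataverse record into the working directory; the exact prompt and model column names are not given in the listing and are not assumed here.

```python
import pandas as pd

# Load the two cohort spreadsheets named in the listing above
# (assumes the .xlsx files sit in the current working directory).
frames = {
    "ages 6-12": pd.read_excel("6_12_ChildSafeLLM.xlsx"),
    "ages 13-17": pd.read_excel("13_17_ChildSafeLLM.xlsx"),
}

for cohort, df in frames.items():
    # Per the file descriptions: 100 rows and 31 columns per cohort.
    print(cohort, df.shape)

    # Columns ending in "_harmful" should hold binary 0/1 safety flags;
    # columns ending in "_action" should hold 0-5 risk-category codes.
    harmful_cols = [c for c in df.columns if c.endswith("_harmful")]
    action_cols = [c for c in df.columns if c.endswith("_action")]

    assert all(df[c].dropna().isin([0, 1]).all() for c in harmful_cols), cohort
    assert all(df[c].dropna().between(0, 5).all() for c in action_cols), cohort
```

Reading .xlsx files with pandas requires the openpyxl engine to be installed; the dropna() calls are a defensive assumption in case some cells are left blank.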