AI Safety

How could I best use this opportunity? (AI Safety)

A university staffer gets greenlit to tackle AI existential risk with faculty.

Deep Dive

A staff member at a top-25 public research university has gained approval from a senior administrator to launch a temporary, interdisciplinary project addressing the existential threat of AI. The project, set to begin in a month when a third of the staffer's hours free up, must involve 1-2 senior faculty members in a limited capacity, with the bulk of the work (10-15 hours/week) handled by the staffer and research assistants. Because the staffer's background is in social sciences, communications, and pedagogy, the project cannot require math beyond a precalculus level.

The staffer seeks feasible directions that foster critical thinking about AI safety rather than advocacy or activism. They emphasize interdisciplinary collaboration and want ideas that senior faculty will support. The opportunity reflects growing institutional interest in AI safety research, even outside technical departments. The staffer's constraints (limited math and a pedagogical focus) point toward a project that educates about or analyzes AI risk from a social science or communications perspective, potentially through curriculum development, public outreach, or ethical frameworks.

Key Points
  • Interdisciplinary AI safety project greenlit at a top-25 public research university.
  • Project requires 1-2 senior faculty in a limited capacity; the staffer and research assistants carry the 10-15 hours/week workload.
  • Staffer's social sciences/communications background caps the math at precalculus level.
  • Project must avoid advocacy and instead foster critical thinking about AI existential risk.

Why It Matters

Academic interest in AI safety is extending beyond technical departments, with non-technical staff now being greenlit to work on existential risk.