Florida has historically led the nation in educational innovation. We aren’t afraid to embrace new technology, provided it expands student potential, strengthens classroom learning, and mitigates foreseeable harms.
But as generative AI begins to redefine how our students learn, research, and even form relationships, we find ourselves at a crossroads: How do we balance our commitment to innovation with the need to protect young people from the harms that exist in today’s digital environment?
Recently, federal policy has dominated this conversation. President Trump signed an executive order on AI aimed at preventing a patchwork of state regulations that could hamper America’s ability to compete with China. Importantly, that same order carves out a vital exception allowing states to implement policies that protect children.
This is an opening Florida must step into. While the state must remain competitive on a global stage, that competition cannot come at the expense of children’s safety or the integrity of schools. A fragmented approach — where a student’s security depends on whether a school district has the technical expertise to set its own rules — is untenable.
Florida needs a statewide, uniform strategy for procuring and using AI tools, and time is short. Adopting and implementing clear policy recommendations will ensure students are protected while giving educators access to tools that can improve classroom outcomes.
The first priority must be securing student data. Statewide guidance should explicitly prohibit the use of personally identifiable student information for training or improving corporate AI models. A child’s digital footprint should not become fuel for a company’s algorithm.
Florida must also require transparency from AI platforms doing business with its schools. Platforms should maintain auditable records of student interactions and implement safeguards to identify accuracy errors, bias, and safety risks. Any tool that interacts directly with students must include mechanisms to flag improper use and enable adult intervention. Parents should also be informed whether, and to what extent, generative AI platforms are used in instruction or required for student participation.
Beyond classroom tools, the most urgent challenge is the rise of human-like AI chatbots. These platforms enable minors to interact with AI designed to simulate emotion and maintain long-term relationships. Blurring the line between reality and simulation is dangerous — and can be deadly.
Such bots can encourage isolation or dispense harmful advice without the judgment of a real person or parental oversight. The risks posed by unregulated social media algorithms to youth mental health are well-documented, and Florida should not wait for a similar crisis involving AI companions. The state should strictly regulate minors’ access to these platforms and require clear warning and disclosure labels for all users.
By establishing statewide guardrails, requiring proper training, and limiting minors’ exposure to harmful human-like AI chatbots, Florida can ensure artificial intelligence serves students rather than exploits them. Addressing this issue would protect Florida’s children and serve as a model for the rest of the nation.
___
Nathan Hoffman is the Senior Legislative Director for the Foundation for Florida’s Future.