News

DA Unveils Generative AI Framework for Faculty and Staff

By Dylan Howlett

12-minute read

In the late fall of 2022, Durham Academy made a discovery that was at once alarming and alluring: Albert Einstein was living in its basement. 

That is, at the very least, how Henrik Kniberg sees it. The Swedish author and artificial intelligence executive says as much in an 18-minute YouTube video titled “Generative AI in a Nutshell.” In November 2022, OpenAI released a public-facing version of its generative AI chatbot. It was called ChatGPT. Its arrival, Kniberg says, was tantamount to Einstein securing permanent residence in the cellars of homes and businesses and schools. There it was — the sum of all human knowledge — placed within arm’s reach of anyone, anywhere, anytime. ChatGPT and its Einsteinian metaphor inspired a profusion of strong reactions, including at DA, where Trevor Hoyt, the director of information technology, began fielding inquiries that spanned a veritable electromagnetic spectrum: of curiosity and befuddlement, of outrage and excitement. What is this thing? Should we be using it? 

Those reasonable queries endured some three years later, when DA educators in multiple divisions watched the Kniberg video as part of the school’s efforts to introduce a generative AI framework for faculty and staff. The answer as to whether faculty should use it, Kniberg says, resonates with a pre-kindergarten-through-12 institution known for its curiosity-driven students, life-changing teachers and future-thinking initiatives: Engage with AI, for the theoretical physicist in the basement could come in handy for moral, happy and productive learners.

“Schools can’t put their heads in the sand,” says Associate Head of School Kristen Klein. And so DA isn’t. 

In the spring of 2023, Hoyt and his staff in the Office of Information Technology invited any AI-intrigued staff members to share their earliest resources and findings about generative AI via Microsoft Teams. There was enough interest to convene an in-person discussion in the fall of 2023, when about 30 staff members gathered. Some decried the arrival of widely accessible chatbots as a disaster, and one that DA should avoid altogether. Others felt compelled to learn more. The consensus landed somewhere in the middle, and with a question: How can we, as educators, know whether to accept or reject something from generative AI?

Hoyt and Julian Cochran, the Upper School computer science teacher and technology coordinator, vowed to find a protective sandbox in which DA educators could experiment with generative AI. They chose Flint — an AI personalized learning platform designed specifically for K-12 education — in spring 2024 to support instruction. It was, for some of the teachers who used it, a revelation. 

Computer science, as Cochran teaches it, is a syntax-heavy discipline rife with errors and bugs, more than he can resolve for 20 students who each require five minutes of attention within a 75-minute class period. He started instructing his students to place their code in Flint to see if the program could offer debugging support. Students soon discovered why their code was amiss in the first place, and how they could avoid similar pitfalls in the future. That freed Cochran to spend dedicated time with coders who needed more extensive guidance.
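The workflow he describes can be pictured with a small, hypothetical example (not an actual DA assignment): the classic off-by-one loop error a beginning Java student might paste into a tutor like Flint, with the fix and the reasoning captured in comments.

```java
// Hypothetical beginner exercise: total an array of quiz scores.
public class ScoreTotal {
    public static void main(String[] args) {
        int[] scores = {90, 85, 78, 92};
        int total = 0;

        // A common student bug is writing "i <= scores.length", which
        // reads one slot past the end of the array and throws an
        // ArrayIndexOutOfBoundsException. The fix is the strict "<".
        for (int i = 0; i < scores.length; i++) {
            total += scores[i];
        }

        System.out.println("Total: " + total); // prints "Total: 345"
    }
}
```

A tutor that explains why the loop bound matters, rather than simply emitting corrected code, illustrates the distinction Cochran draws between getting an answer and learning to avoid the pitfall next time.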

“Instead of putting out little fires all over the place, it helped foster a deeper sense of learning.” 

Julian Cochran
Upper School Computer Science Teacher and Technology Coordinator

On the first day of his ADV Computer Science: Advanced Data Structures course this spring, Cochran told his 10 students that he had taught the class in Java for two years. But he wondered aloud if they’d want to learn C++, a general-purpose programming language still widely used in systems and application development. His students responded enthusiastically and affirmatively. Cochran blanched. I haven’t touched this language in 20 years, he thought. But the best thing a high school computer science program can achieve, he says, is to send students to college with experience in multiple programming languages. He took plans and rubrics for labs that were designed for Java, fed them into Flint and asked how he could refine them for C++. What extra things do I need to be prepared to teach this lesson? Where can you push me? Where can you push my students in a higher-level class? He even used Flint to assist in the creation of Google Forms that students could use as a method of peer evaluation. “It’s been a nice way as a teacher to professionally refine some things I’m doing,” Cochran says.

The refinement has spread to other divisions. Stephanie Rudolph, a Middle School world languages teacher and academic leader, told attendees at a recent Middle School faculty meeting that her first foray into AI was less than ideal. Nothing harmful befell her or her students, Rudolph said: She just didn’t have enough experience with the tools. But Rudolph stuck with it and soon found Flint’s gift for differentiated instruction. The program can, for instance, engage a student in a conversation that meets the speaker at their precise level of fluency. Hoyt and his team have also approved Copilot, the Microsoft-designed AI software that doesn’t use any user-generated data to train its models, for use among faculty. 

That kind of empowerment was, in many ways, the objective of the DA Generative AI Board of Trustees Committee when it convened for the first time in the fall of 2024: How could generative AI, in all of its predictive splendor and unavoidable prevalence and abundant minefields, empower DA educators and equip students with requisite literacy for an AI-forward future? The answer, in part, is a framework that gives faculty the confidence and prerogative to experiment with generative AI.

“To prepare our students for moral, happy, productive lives in an unscripted future, we will engage Generative AI (GenAI) through strategic and curiosity-driven experiments to accelerate and enrich learning, teaching, operational efficiency and community life. With appropriate protections for data privacy and an emphasis on ethical decision-making, we will maintain the centrality of critical thinking, individual creativity, authentic relationships and human thriving.”

Generative AI Vision Statement
Durham Academy

Such confidence rests on a traffic light framework. Christian Lundblad, the committee’s co-chair, has dutifully watched the advance of generative AI from multiple perspectives: as a DA trustee, a DA parent, a parent of a DA alumnus and a university administrator. The senior associate dean for faculty and research at UNC-Chapel Hill’s Kenan-Flagler Business School drew on the work of professors at UNC and other institutions to create a set of accessible, specific guardrails that would encourage DA faculty to engage with AI — and without fear of endangering data security or ethical use. “Green Light” refers to low-risk applications and free experimentation, such as enhancing learning materials, generating assessments and producing routine communications. “Yellow Light” indicates moderate risk, while “Red Light” warns against significantly risky usage, including sensitive data handling, surveillance and bias.

“All along, you try to be really mindful of where we started this conversation as a committee, which is all of this work is meant to accelerate and facilitate student engagement and the development of critical thinking. None of this is meant to replace either teacher expertise or the student component of human development.” 

Dr. Christian Lundblad
DA Generative AI Committee

“If we have a mission about ‘moral, happy and productive lives,’ and if we have a strategic vision that has pillars that are ‘prepared for life’ and ‘innovate more boldly’ and ‘meet the needs of our learners,’ we would be crazy not to be embracing AI as a tool,” Klein said. “Our posture is that the potential upside of AI when used as a collaborator — still with a human driver, but as a support and collaborator — will improve our quality of work and our outcomes.”

That human driver, like most great things at DA, begins with faculty. Six AI committee members discussed the rationale behind the framework and their hopes for its impact: empowering faculty, equipping students and establishing house rules for the figurative savant in the basement.
 

The following responses have been lightly edited for clarity and brevity.

Generative AI is a collaborator, not a replacement — nor a substitute for critical thinking.

Kristen Klein

Klein: “When you look at the framework, the idea of AI as a help and a collaborator to a piece of work or a project that is driven by human fitness — that’s really the antidote to unethical use. You’re still centering humanity. I think we can have conversations with kids around what it means to produce art in a world where AI exists. What does this mean for human creativity? That’s the work that we need to be doing: engaging students in those conversations and demystifying the use of AI as an appropriate collaborator. What we need to stay away from is allowing kids to substitute AI for building their own critical thinking skills. It’s a real challenge for us as faculty.”

Hoyt: “There is the concern that students will just use AI to write a paper. But I think it can help them with their entire workflow. If they’re assigned a project, for instance, it can help them brainstorm: what they should focus on, how they should approach their opening paragraph. Instead of producing an end product, generative AI can reveal the growth of how they got from the start to the finish — and I think that’s important. That will show their teachers that they’ve really engaged in this project and tried to learn something. They didn’t just go out and look for an output.”

Melissa Mack, Middle School STEAM teacher & digital learning coordinator: “I think some kids think it’s magic, and it’s going to do my work for me. It doesn’t replace critical thinking. You need to have those tools to be able to engage with AI appropriately, to say, ‘I don’t actually like your suggestions because that isn’t what I’m trying to convey here.’ It will take time to reimagine the possibilities of how it might help foster creativity and critical thinking without replacing it.”

Lundblad: “The interesting piece is as the technology is changing, as the landscape is changing, how do we make sure that we’re actually integrating it into the classroom to leverage it — to augment the capacity for critical thinking, to take some tedium off the plate and actually let students think a little bit more deeply about the parts where they should be?”

Alivia Kliesen

Alivia Kliesen, philanthropy services manager and AI Committee member: “There’s this conception that all students know everything about AI, and they’re using AI every second, because they’re born into a more tech-savvy generation. And that’s just not true. Students are just as diverse in their perspectives on AI as adults. There are students who have ethical questions about AI. There are students who are absolutely afraid to touch it because they’re convinced that any use of AI is cheating. There are students who are probably a little heavy with AI usage on their assignments. But I think we shouldn’t just clump the younger generation into this one box and say they all love AI, and they all know how to use it, and they’re using it all the time. It’s just not true. There are things they have to learn.”

Cochran: “If you go to prompt engineering, you’re saving energy resources because you get straight to the heart of the matter with that AI when you need help from it. I’ve worked with an AI to figure out how to retool all of my rubrics and what I want my kids to do in my class so they don’t take the PDF of that rubric and just drop it in an AI and say, ‘Write the paper for me.’ How do you change those outcomes to move students from seeing it as a way to cheat and just solve problems easily to ‘Oh, wow, this is a useful resource and learning tool for me?’ That’s a really long journey. But we don’t really get tired of that conversation.”

Klein: “There are really interesting questions about how it can enhance learning, and questions about how we build a more humane world with the existence of this technology taken into account. Those are the conversations we should be having with kids.”

Cochran: “I think the one thing that we do really, really well here at DA — and continue to do really, really well here at DA — is that fostering of human connections. I can’t replicate that from AI. I think we have to keep that in mind and recognize the value of the human connection, and get people to see that this could be another threat to that. But I think we’re doing our community and our families a disservice if we don’t try and learn more about why that is a threat. If what you really value is that process of learning, that process of teaching, that process of learning how to examine history or literary analysis and how to write a five-page paper about it, that’s still there, and that’s still really important for our students.”

AI can reduce administrative tasks and return to teachers their most sacred resource: time to work one-on-one with students. 

Hoyt: “My hope is that more teachers will realize the effects that Julian Cochran has realized: that Flint and AI are giving them time back to work one-on-one with students and get to know their students better.”

Klein: “Thinking beyond the realm of teaching students how to use this technology, it can create efficiencies in teacher practices. It frees them up to spend more time on the most human things that they do and less time on their own screens: administrivia, paperwork, writing emails. I think the potential is huge, and I think part of what we need to do is to stay up to date on the pitfalls that we should watch out for. We’re doing that.”

Kliesen: “I hope it frees up time and energy for people to be working in what they truly think is most impactful for them, whatever that means to them as an individual.”

Dr. Christian Lundblad

Lundblad: “What I’d actually like to see is more faculty figuring out what level of engagement and what kinds of things they might like to do at a minimum to maybe make their lives a little bit easier. And that doesn’t just mean passing off grading, but it might mean lesson planning or brainstorming or something else that can kind of get you a little further down the runway — and then at maximum, there might be some things that you learn or that you get from others that you hadn’t thought of that allow you to thread it into the classroom and actually augment the experience in a way that you couldn’t have before the advent of this technology. So play. Learn. And then if there’s something you like, lean in. If there’s something you don’t like on an informed basis, say so, and why. It’s really about engagement.”

How educators apply the DA framework to their classroom — beyond engaging with it — is entirely up to them. Skeptics are welcome.

Mack: “Everybody’s in a different place with it. All of those views are totally valid and important. It’s only through robust conversation about how we do things in the classroom that we become better teachers, which is really the goal.”

Julian Cochran

Cochran: “Everybody is allowed to be a skeptic. I hope the framework helps those doubts, those anxieties that faculty might have. Nothing in the framework says you have to use it in your class. But I feel very strongly that as a teacher, I’m a lifelong learner. I really struggle to write things off without trying to learn them first. Everything in the framework says we encourage you to try. Learning together is a really powerful thing, and I hope the framework really facilitates some learning together. That’s a critical piece. Even the enthusiast needs to understand why the doubter has doubts.”

Lundblad: “I think it’s incumbent upon teachers to figure it out. It’s perfectly acceptable for them to say, ‘Hey, I spent a bunch of time. I really understand what these things do, and I might be OK with “C,” but I don’t want “A” and “B” in my classroom at all.’ It’s fine, but really engage.”

Melissa Mack

Mack: “I know for us in the Middle School, educators have been more inclined to try more things when they know programs have been vetted. I think that once people start trying, you start to see more possibilities: either places where you realize, ‘This could be really helpful,’ or places where you think, ‘OK, if I was using this with kids, these are things I’d want to talk about so that they understand how the technology works.’ Being an informed user empowers you to make decisions about how and when you choose to use it. Finding that for our teachers helps them to empower their kids. A lot of that generative AI vision statement is about that — about helping our kids be prepared for the future. You’ve got to start somewhere. If teachers are feeling apprehensive about trying it for themselves, they’re likely to feel a little less sure about trying it in their classroom with kids. But people have been really open and receptive while also being a little cautious. The goal has never been, ‘Everyone must be doing AI at this particular moment,’ but rather trying to build some understanding and literacy and trying to see how it, in some ways, helps us be more efficient.”

Cochran: “I think at a certain point, ‘embrace’ is not the right word. ‘Acknowledge’ is probably the best word to use. Acknowledge that it’s there. Acknowledge that it’s a watershed moment for education and educators.”

Introducing students to AI is inseparable from DA’s charge to prepare students for life.

Klein: “We are already hearing a high level of expectation from hiring managers who say they’re already leaning toward candidates who have demonstrated AI capability. Schools can’t put their heads in the sand about this because our students are going to need these capabilities to be able to compete in the job market in four years. I think we ignore that at our peril, and I think we can lean in in ways that are really developmentally appropriate. And we have built — with this framework — good guardrails around data privacy and ethical use.”

Trevor Hoyt

Hoyt: “It’s going to be an important skill when they get out into the real world. More and more, companies are asking their employees, ‘Can you do the thing we need you to do?’ Yes, there are your grades and your transcript. But does that really mean you know how to do what a company needs you to do? I think companies are getting better at assessing that. Our hope is that students will be able to show proficiency in using this tool and focusing on its uses as a tool for their own learning.”

Lundblad: “What students are going to need to be prepared to confront as they venture out into the world are the tasks and challenges for any profession. It’s not that those are changing so fundamentally: Rather, it’s that the toolbox that we use to try to tackle those questions is on hyperspeed right now. For me, that’s the interesting part: It’s figuring out how to experiment in a way that gives students some degree of proficiency and allows them to use it effectively while making sure that we’re pushing them along in a way that’s getting them ready to hit the ground running.”

Hoyt: “I would want DA families to know that we see the big picture of how generative AI can be used wisely if we teach students the right way, and we also see the pitfalls of overuse as a society. We really want to strike a middle ground of teaching a valuable skill that students are going to need to learn how to use for the rest of their lives, but without becoming overly reliant and dependent on it.”

Lundblad: “It’s going to be a part of who we are, and threaded in what we do. We’re not going to shy away from the world and put our heads in the sand. That doesn’t mean there aren’t going to be pockets where we’re going to be very thoughtful about where to preclude its use. But I felt like a lot of the initial reaction that I saw around the educational landscape was very much about shutting things down. That’s neither realistic nor to the benefit of our kids. We’re going to have to navigate this space. So how do we teach them to navigate it and maybe even leverage it while also being thoughtful about where and when and how?”

Klein: “I think that humans are too often driven by fear — and whenever you are in a fear-based place, you don’t make the best decisions. So how do we put our brains together and say, ‘All right: We need to get out of a fear-based space so that we can make the best decisions for kids?’” 

Mack: “In the end, that’s what all of us want: the best experience for kids and to prepare them for the world they’re going into.”
 



Durham Academy AI Committee

Committee Co-Chairs
Trevor Hoyt, Director of Information Technology
Dr. Christian Lundblad, Board of Trustees Treasurer, DA Parent & Parent of Alumnus

Committee Members
  • Kristen Klein, Associate Head of School
  • Melissa Pfeil, Board of Trustees Chair & Parent of DA Alumni
  • Debbie Dibbert, Board of Trustees Vice Chair & Parent of DA Alumni
  • Dr. Ronnie Chatterji, Trustee & DA Parent
  • Julian Cochran, Upper School Technology Academic Leader and Computer Science Teacher & Parent of Alumni
  • Alivia Kliesen, Philanthropy Services Manager & Generative AI Implementation Specialist
  • Melissa Mack, Middle School STEAM Teacher and Digital Learning Coordinator