The GenAI Sandbox provides a platform for developing tooling to investigate and explore teaching and learning with Generative AI.

[Diagram: LLM Sandbox Architecture]
Architecture
Application Layer
GenAI applications/tools are developed in the application layer (shown in blue). Three open-source sample applications will be provided to act as a guide: the GraderBot, the TutorBot, and Socrates.
User interaction: The front end (the user-facing side) of each sample application, and of the applications that TLEF applicants create where appropriate, provides user interaction and can make requests directly down to the LLM layer when local/domain-specific context is not required; i.e., a RAG component is not strictly required to access the LLM layer.
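For illustration, here is a minimal sketch of a front end calling the LLM layer directly, with no RAG step in between. The gateway URL, endpoint path, payload shape, and auth header are all assumptions for the sake of the example; the actual interface is defined by the Web Service Layer described below.

```python
# Minimal sketch of an application calling the Sandbox's LLM layer
# directly (no RAG component). Endpoint URL, payload shape, and auth
# header are assumptions, not the Sandbox's actual API.
import requests

SANDBOX_GATEWAY = "https://sandbox.example.ubc.ca/api/v1/generate"  # hypothetical

def ask_llm(prompt: str, model: str = "llama3.1-8b") -> str:
    """Send a prompt straight to the LLM layer and return its response."""
    response = requests.post(
        SANDBOX_GATEWAY,
        json={"model": model, "prompt": prompt},
        headers={"Authorization": "Bearer <project-api-key>"},  # placeholder
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["response"]  # assumed response shape

if __name__ == "__main__":
    print(ask_llm("Explain spaced repetition in two sentences."))
```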
RAG layer: Some GenAI applications require Retrieval-Augmented Generation ("RAG") to supply domain-specific context to the LLM and improve the quality of responses to prompts. Because RAG is specific to each implementation that requires it, it will not be included as a standard component of the Sandbox; Sandbox developers will build their own RAG as part of their projects where it is required. Notably, RAG requires storing application-specific context information in a database file, and developing it on a per-application basis allows faculty to control the storage and deletion of those files, which is the preferred approach to managing intellectual property in RAG contexts. Any intellectual property or personal information held by the application would likely reside within the RAG layer; if such data is being held, the application, and the specifics of how this database is stored, will need to be submitted for PIA approval.
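As a rough sketch of what a per-application RAG step might look like: retrieve the most relevant locally stored passages and prepend them to the prompt before calling the LLM. The in-memory keyword retriever below stands in for a real embedding model and vector store that the project team would control (and be able to delete); ask_llm is the hypothetical helper from the previous sketch.

```python
# Sketch of a per-application RAG step: rank locally stored passages
# and prepend the best matches to the prompt. The naive keyword
# overlap is purely illustrative; a real implementation would use
# embeddings and a project-controlled vector store.

# Application-specific context, stored and managed by the project team.
DOCUMENTS = [
    "Assignment 2 is due on March 14 and covers recursion.",
    "Office hours are Tuesdays 2-4pm in room ICCS 104.",
    "Late submissions lose 5% per day, up to three days.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored passages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def ask_with_context(question: str) -> str:
    """Augment the prompt with retrieved context, then call the LLM."""
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return ask_llm(prompt)  # ask_llm from the earlier sketch
```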
LLM Sandbox
Web Service Layer: The front end of the LLM Sandbox platform is an API gateway that will serve requests from applications to the LLM platform. It will communicate prompts to the LLM Platform and return LLM responses to the application interface. For straightforward privacy and security, the Sandbox will not store user prompts or responses from the LLM.
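A sketch of the stateless relay pattern this describes, assuming a FastAPI service and an internal platform URL; neither reflects the Sandbox's actual implementation. The point is that the gateway forwards prompts and responses without persisting them.

```python
# Sketch of the gateway's stateless relay pattern: forward the prompt
# to the LLM platform and return the response, persisting nothing.
# FastAPI/httpx and the upstream URL are illustrative choices only.
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
LLM_PLATFORM = "http://llm-platform.internal:8000/generate"  # hypothetical

class PromptRequest(BaseModel):
    model: str
    prompt: str

@app.post("/api/v1/generate")
async def generate(req: PromptRequest) -> dict:
    # Relay only; no prompt or response is written to disk or logs.
    async with httpx.AsyncClient(timeout=60) as client:
        upstream = await client.post(LLM_PLATFORM, json=req.model_dump())
        upstream.raise_for_status()
        return {"response": upstream.json()["response"]}
```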
LLM Platform: Multiple models can be run on the platform, depending on the needs of the application. The default models will be the 8-billion-parameter version of Meta's Llama 3.1 and the 3.8-billion-parameter version of Microsoft's Phi-3. Multimodal models, such as those that generate images, audio, or video, will not be supported by the LLM Sandbox. If your project intends to use non-text modalities, please seek guidance from the Incubator Support team.
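For example, an application might select between the two default models on a per-request basis. The model identifier strings below are assumptions; the Sandbox defines the actual names (ask_llm is the hypothetical helper from the earlier sketch).

```python
# Choosing a model per request; identifier strings are illustrative.
summary = ask_llm("Summarize the feedback below: ...", model="llama3.1-8b")
quick_check = ask_llm("Is this sentence grammatical? ...", model="phi3-3.8b")
```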
Costs
Cost information for running applications that leverage the Sandbox will be provided to those applying for TLEF funding.
Timeline
The LLM Sandbox will be available for the duration of the Large TLEF Special Call. While a similar service may be available after the potential two-year span of the special call, projects should be designed and engineered to be agnostic of the underlying architecture in case the platform changes (one approach is sketched below), and should be built with a plan for development independence if continued use of the tooling is anticipated. The LT Hub team can help advise on planning for project end and any subsequent migration.
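One way to stay agnostic of the underlying architecture, sketched here rather than prescribed: route all LLM access through a small interface so that a platform change touches a single adapter rather than application logic. The class and function names are hypothetical.

```python
# Sketch of backend-agnostic design: all LLM calls go through one
# small interface, so swapping platforms means swapping one adapter.
from typing import Protocol

class LLMBackend(Protocol):
    def generate(self, prompt: str) -> str: ...

class SandboxBackend:
    """Adapter for the LLM Sandbox gateway (hypothetical API)."""
    def generate(self, prompt: str) -> str:
        return ask_llm(prompt)  # ask_llm from the earlier sketch

class LocalBackend:
    """Drop-in replacement if the project later self-hosts a model."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError("swap in a self-hosted client here")

def grade_submission(backend: LLMBackend, submission: str) -> str:
    # Application logic depends only on the interface, not the platform.
    return backend.generate(f"Give formative feedback on:\n{submission}")
```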