Generative AI Plugin for Unreal Engine (C++ Study Project)
What It Does
The plugin connects Unreal Engine NPCs to:
🌐 Online models like OpenAI’s GPT (via API)
💻 Offline local models using llama.cpp (runs on the player’s machine)
LINK TO MY POST (also walks through the steps I took to build it)
Example Video:
Why This Was Challenging
Complex Integration - Connecting Unreal Engine’s C++ systems with external LLMs (both cloud-based and local) required deep knowledge of both Unreal’s architecture and how LLMs work. It wasn’t just a plug-and-play setup - I had to manage performance, memory, threading, and error handling carefully.
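One concrete threading concern is that an LLM call can take seconds, far too long to run on the game thread. The sketch below is a hypothetical, engine-free illustration (plain `std::async` rather than Unreal's task system, and a stubbed `RunInference` in place of a real network or llama.cpp call) of kicking inference onto a worker thread and polling for the result:

```cpp
#include <future>
#include <string>

// Hypothetical stand-in for a slow LLM call (an HTTPS round-trip to a
// cloud API, or local llama.cpp inference). The real call could block
// for seconds, which is why it must not run on the game thread.
std::string RunInference(const std::string& Prompt)
{
    return "reply to: " + Prompt;
}

// Launch inference on a worker thread and hand back a future. The
// caller (e.g. an NPC dialogue component ticking on the game thread)
// can keep running and check the future each frame instead of stalling.
std::future<std::string> RequestNpcLine(const std::string& Prompt)
{
    return std::async(std::launch::async, RunInference, Prompt);
}
```

In-engine, the same shape maps onto Unreal's async primitives, with the result marshalled back to the game thread before touching any actors.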
Multiple Backends - Supporting both online and offline models added extra layers of difficulty. Because llama.cpp works very differently from an online API, I had to design a flexible system that could switch between the two.
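A common way to get that kind of switchable design is a shared backend interface. The following is a minimal sketch, not the plugin's actual code: the class and function names (`ILlmBackend`, `MakeBackend`, etc.) are hypothetical, and the backends return tagged strings in place of real API and llama.cpp calls:

```cpp
#include <memory>
#include <string>

// A common interface lets NPC code treat a cloud API and a local
// llama.cpp runner interchangeably.
class ILlmBackend
{
public:
    virtual ~ILlmBackend() = default;
    virtual std::string Generate(const std::string& Prompt) = 0;
};

// Online backend: in a real plugin this would issue an HTTPS request
// to the OpenAI API; here it just tags the prompt.
class OpenAiBackend : public ILlmBackend
{
public:
    std::string Generate(const std::string& Prompt) override
    {
        return "[openai] " + Prompt;
    }
};

// Offline backend: in a real plugin this would call into llama.cpp
// running on the player's machine.
class LlamaCppBackend : public ILlmBackend
{
public:
    std::string Generate(const std::string& Prompt) override
    {
        return "[llama.cpp] " + Prompt;
    }
};

// Factory an NPC component could use to pick a backend at runtime,
// e.g. driven by a setting exposed in the editor.
std::unique_ptr<ILlmBackend> MakeBackend(bool bUseLocalModel)
{
    if (bUseLocalModel)
    {
        return std::make_unique<LlamaCppBackend>();
    }
    return std::make_unique<OpenAiBackend>();
}
```

The rest of the plugin only ever talks to the interface, so adding another provider means adding one class, not touching the NPC code.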
Customizable & Usable - I wanted the plugin to be useful to designers (not just programmers), so I focused on making it easy to configure, with clear options and settings inside Unreal Engine.
What I Learned
- Research & Criteria Setting - As part of this study project, I had to research multiple technical solutions, compare them and set clear criteria for what the plugin should achieve. This taught me how to balance feasibility, performance, and user needs when planning a project.
- Technical Skill Growth - I significantly improved my C++ and Unreal Engine skills, especially in areas like plugin development, engine integration and working with external libraries.
- Optimization Thinking - I had to make sure the plugin runs efficiently even under stress, especially when using local models.
- Study Planning & Reflection - Throughout the project, I set weekly goals, evaluated my progress and adapted my plans when challenges came up. I also reflected on what worked and what I’d do differently next time.
