Local LLMs: How to Increase Target Customer Segments for Software Applications
Large language models are most commonly used for generative content: applications contact large datacenters and shuttle information to and from the gargantuan models behind popular AI products such as Google Gemini, Microsoft Copilot, Anthropic’s Claude, and OpenAI’s ChatGPT. Most AI models and features revolve around generative content and cloud-based communication between server and user.
But language processing with large, resource-intensive models is not the only use for AI. Natural language processing is a mechanism for understanding human language, and it is not exclusive to multimillion- or billion-dollar companies. There are many open-source language models, such as Qwen 2.5 hosted on Hugging Face, that can be downloaded at relatively small sizes (under 100 MB) and run locally on any device, from mobile to PC. These models do not contact external servers, do not store any data you enter, and are not tied to the large and cumbersome data centers, in the U.S. and around the world, that siphon water and emit horrific noise pollution.
These models aren't for generating images, music, or video; they simply process human language into tasks, and that capability has a variety of uses that can make applications and processes more efficient and easier to use.
A developer can build a local language processing model into an application so the user can control any of its functions with words. This can be presented as a main feature of the application, as in the SF-1 by Quadracollision, or used as a background feature accessible when needed. Consider the SF-1: a mobile synth-generation app featuring menus of wave types and close to a hundred parameter and effects knobs for manipulating the soundwave.
[Image: JSON view]
In the SF-1, the natural language processing model tweaks knobs and makes menu selections by manipulating the code that corresponds to those changes. The user can then adjust the knobs and menus manually to home in on the sound they’re looking for, or start with manual changes and ignore the natural language processing entirely. The AI model is not actually generating anything; it is merely manipulating controls that already exist to produce something new.
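A minimal sketch of this pattern, under assumptions: the parameter names below are hypothetical, and the model call is a stub returning a canned response so the example runs offline (in practice it would invoke a small local model such as Qwen 2.5). The model is asked to emit only a JSON patch, and the application validates and clamps each value before moving any knob.

```python
import json

# Hypothetical parameter registry with legal ranges; a real synth app
# would expose its own controls here.
PARAMS = {
    "attack":  {"value": 0.1, "min": 0.0, "max": 5.0},
    "decay":   {"value": 0.3, "min": 0.0, "max": 5.0},
    "sustain": {"value": 0.8, "min": 0.0, "max": 1.0},
    "release": {"value": 0.5, "min": 0.0, "max": 5.0},
    "wave":    {"value": "sine", "options": ["sine", "square", "saw", "triangle"]},
}

def build_prompt(request: str) -> str:
    """Ask the model for a JSON patch rather than free-form text."""
    return (
        "You control a synthesizer. Respond ONLY with a JSON object mapping "
        f"parameter names to new values. Parameters: {list(PARAMS)}.\n"
        f"User request: {request}"
    )

def run_local_model(prompt: str) -> str:
    # Stub standing in for a local model call (e.g. a small Qwen 2.5
    # checkpoint run through transformers or llama.cpp). Returns a canned
    # JSON patch so this sketch runs without any model download.
    return '{"wave": "saw", "attack": 0.01, "release": 9.0}'

def apply_patch(patch_text: str) -> dict:
    """Validate and clamp the model's JSON patch before touching any knob."""
    patch = json.loads(patch_text)
    for name, value in patch.items():
        spec = PARAMS.get(name)
        if spec is None:
            continue  # ignore parameters the app does not expose
        if "options" in spec:
            if value in spec["options"]:
                spec["value"] = value
        else:
            # Clamp numeric values into the knob's legal range.
            spec["value"] = max(spec["min"], min(spec["max"], float(value)))
    return {name: spec["value"] for name, spec in PARAMS.items()}

state = apply_patch(run_local_model(build_prompt("a bright, plucky lead")))
```

Because the model only proposes changes and the app enforces the ranges, a bad generation (here, a release of 9.0 against a maximum of 5.0) is clamped rather than breaking the synth, and every resulting state is one the user could have reached by hand.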
If the application did not have this natural language processing element, the user would have to set everything manually. To someone unfamiliar with what different functions (attack, sustain, decay, envelope, etc.) do to a sound, or how different waveforms produce different tones, that task would seem daunting, especially given the number of options available. Simplifying the control scheme is one solution, but it excludes the customer segments originally targeted by the vast customizability.
However, a solution that keeps the complex control scheme while simplifying the process for customer segments unfamiliar with the technology is local language processing built into the application itself. This does not detract from the app's depth for educated and professional users, and it can increase their efficiency by generating many different samples quickly. Its benefit to amateur and inexperienced segments is even greater, because it means the application is built for them regardless of their skill. They can customize and export sounds they like without having to understand individual parameters, which lets a firm target these segments in its marketing alongside the educated producer looking for new tools.
This concept could be extended to any application that requires the user to learn a control scheme, significantly lowering the time and energy needed to start using the app and ultimately expanding the customer segments a firm can target.
2/10/26