Our Blog


Local LLMs: How to Increase Target Customer Segments for Software Applications

Large language models are most commonly used for generative content: a client contacts a large datacenter and transfers information to and from the gargantuan models behind popular AI products such as Google Gemini, Microsoft Copilot, Anthropic’s Claude, and OpenAI’s ChatGPT. Most AI models and features revolve around generative content and cloud-based communication between server and user.

But language processing with large, resource-intensive models is not the only use for AI. Natural language processing is a mechanism for understanding human language, and it is not exclusive to multimillion- or billion-dollar companies. There are many open-source language models, such as those hosted on Hugging Face like Qwen 2.5, that can be downloaded at relatively small sizes (under 100 MB) and run locally on any device, from mobile to PC. These models do not contact any external servers, do not store any data you enter, and are not affiliated with the large, cumbersome data centers in the U.S. and around the world that siphon water and emit horrific noise pollution.

These models aren't for generating images, music, or video; they simply process human language into tasks, and that opens up a variety of uses that can make applications and processes more efficient and easier to use.

Developers can build applications with local language models built in, so the user can control any function of the application using words. This could be presented as a main feature of the application, as in the SF-1 by Quadracollision, or used as a background feature available when needed. I'll use the SF-1 as an example: a mobile synth-generation app featuring menus of wave types and close to a hundred parameter and effect knobs for manipulating the soundwave.

SF-1 control scheme
SF-1 JSON structure (JSON view)

In the SF-1, the natural language processing model tweaks knobs and makes menu selections by editing the code that corresponds to those changes. The user can then adjust the knobs and menus themselves to home in on the sound they're looking for, or start with manual changes and ignore the language model entirely. The AI model is not actually generating anything; it merely manipulates controls that already exist to produce something new.
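As a minimal sketch of that idea: the model's output can be treated as a small JSON patch against the app's existing control state. The knob names, ranges, and JSON shape below are hypothetical, not the SF-1's actual parameter set.

```python
import json

# Hypothetical knob names with 0.0-1.0 ranges; the real app has
# close to a hundred parameters.
params = {"attack": 0.1, "sustain": 0.8, "decay": 0.3, "reverb": 0.0}

# The model emits edits to existing controls, not audio itself --
# here, a small JSON object mapping knob names to new values.
model_output = '{"attack": 0.4, "reverb": 0.6}'

def apply_edits(state, edits_json):
    """Merge model-proposed knob changes into the current state,
    ignoring knobs the app does not actually have."""
    edits = json.loads(edits_json)
    for knob, value in edits.items():
        if knob in state:
            # Clamp so a bad suggestion can never produce an
            # invalid synth state.
            state[knob] = max(0.0, min(1.0, float(value)))
    return state

params = apply_edits(params, model_output)
```

Because the model only proposes edits, the user can keep turning the same knobs by hand afterward; nothing about the manual workflow changes.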

If the application did not have this natural language processing element, the user would have to choose everything manually. To someone unfamiliar with how different functions (attack, sustain, decay, envelope, etc.) shape a sound, or how different soundwaves produce different tones, that task would seem daunting, especially given how many options are available. Simplifying the control scheme is one solution, but it excludes the customer segments originally targeted by the vast customizability.

A better solution, one that keeps the complex control scheme while simplifying the process for customer segments unfamiliar with the technology, is local language processing built into the application itself. This does not detract from the app's depth for educated and professional users, and it can even increase their efficiency by generating many different samples quickly. For amateur and inexperienced customers the benefit is even greater, because the application is now built for them regardless of skill: they can customize and export sounds they like without understanding individual parameters. That lets a firm market to these segments alongside the educated producer looking for new tools.

This concept extends to any application that requires the user to learn a control scheme, significantly lowering the time and energy needed to start using the app and ultimately expanding the customer segments a firm can target.

2/10/26

Soundfriend Beta is Releasing Today!

Soundfriend is live, and we at Quadracollision are very excited to see what wild creations you can put together. The Soundfriend beta download is an APK with an embedded natural language processing model and should work on any Android device.

We made Soundfriend to streamline the music production process, making synth samples easier to acquire and more customizable than ever. While jamming on our own music, we realized there were gaps between what we wanted in our workflow and what was available, so we decided to build it ourselves.

With the rise of generative AI music programs, we wanted a new take on what language processing models can do. AI doesn't have to generate content; it can instead act as a tool that helps the user control what already exists. In Soundfriend, text is translated into knob turns and menu selections, giving the user a starting point and simplifying the process for someone unfamiliar with synths and the language of musical effects and controls. That lowers the knowledge needed to use the application and speeds up the workflow. The model is hosted locally on the user's device and never connects to an external server: Soundfriend runs without any internet connection, and we don't collect any data the user enters.
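The text-to-controls flow can be sketched roughly as follows. This is an illustration, not Soundfriend's actual code: the `local_model` function below is a stand-in stub (a real deployment would call the embedded on-device model here), and the control names in its reply are made up.

```python
import json
import re

def local_model(prompt: str) -> str:
    # Stand-in for the on-device language model. In a real app a
    # small local LLM produces this text and nothing leaves the
    # device; we fake a plausible reply so the sketch is runnable.
    return 'Sure! {"waveform": "organ", "attack": 0.2, "delay": 0.5}'

def text_to_controls(user_text: str) -> dict:
    """Turn a free-text request into knob and menu settings by asking
    the local model for JSON and parsing its reply."""
    prompt = ("Translate this request into synth control changes, "
              "answering only with a JSON object: " + user_text)
    raw = local_model(prompt)
    # Models sometimes wrap JSON in extra chatter; keep only the object.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    return json.loads(match.group(0)) if match else {}

controls = text_to_controls("a warm organ with a slow attack and some echo")
```

The important property is that the model's output is just a set of control positions, so the user can keep editing the same knobs by hand from wherever the text request left off.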

Visualize the soundwave and control its parameters after the initial generation. Parameters come in 21 groupings, including effects and processing: add compression, attack, envelope, delay, and much more. With 57 predetermined wave types ranging from percussion to organs to strings, plus the option to draw a custom soundwave, the variety of samples Soundfriend can produce is practically limitless. Create chords with the ‘add group’ option, or overlay different synth sounds on top of each other to get some crazy tones.
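Layering synth groups boils down to summing waveforms sample by sample. Here is a minimal sketch of that idea using plain sine waves as stand-ins for the app's wave types (the sample rate and note choices are arbitrary, not Soundfriend's internals):

```python
import math

SAMPLE_RATE = 8000  # deliberately low; keeps the sketch tiny

def sine_wave(freq_hz, seconds):
    """One plain sine wave -- a stand-in for any generated wave type."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def overlay(*waves):
    """Mix several equal-length waves by summing their samples, then
    rescale so the result stays inside [-1, 1] -- the same idea as
    layering synth groups on top of each other."""
    mixed = [sum(samples) for samples in zip(*waves)]
    peak = max(abs(s) for s in mixed) or 1.0
    return [s / peak for s in mixed]

# A C major chord built from three layered sine waves (C4, E4, G4).
chord = overlay(sine_wave(261.63, 0.25),
                sine_wave(329.63, 0.25),
                sine_wave(392.00, 0.25))
```

The peak rescaling step is why stacked groups don't clip: however many layers you add, the mix is normalized back into the valid sample range.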

Let us know what you create by using the #Soundfriend hashtag. Download Soundfriend at quadracollision.com through Mediafire, and feel free to contact us anytime with comments, questions, concerns, or requests for collaboration.

Jan 18, 2026