The tool analyzes your inbox messages with AI and automatically generates both a positive and a negative professional email response, letting you pick and edit one before sending it through your regular email client.
Hi,
I built an AI-powered email composer that integrates with your email inbox via IMAP. The system:
Reads incoming emails from your inbox
Uses an LLM (Large Language Model) to automatically compose two response options:
A professional positive response
A professional negative response
You can edit either response before sending. Once sent, the email syncs with your mailbox’s sent folder through IMAP, ensuring it appears in your regular email client (whether you use webmail, Thunderbird, or other email tools).
For the LLM API I used Gemini during development and then switched to Mistral for production. (I can hot swap if needed, to any suitable LLM API, including self-hosted ones.)
Note: There’s a demo video available, but please be aware it’s in German.
I didn’t understand a word but I watched the demo and got the gist! By “hot swap” do you mean the LLM is configurable in the app? Or do you mean you would have to recompile your binary?
There is no need to recompile once hot swapping is implemented.
In general you can swap any LLM API as long as you avoid the provider-specific features. Frankly, I never needed any of them; they all feel like they exist mainly to vendor-lock the inexperienced developer. Some adaptation is needed, such as writing provider-specific variants of prompts, or maybe swapping tokenizers if you use them, but it is not too bad.
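A minimal Go sketch of what "hot swap" can look like in practice (the names `Completer`, `mockProvider`, and `composeReply` are hypothetical illustrations, not the author's actual code): the app talks to one small interface, and each provider (Gemini, Mistral, a self-hosted server) gets its own adapter behind it.

```go
package main

import (
	"context"
	"fmt"
)

// Completer is a hypothetical provider-neutral interface. Each LLM API
// gets a small adapter that implements it; the rest of the app never
// sees a vendor SDK.
type Completer interface {
	Complete(ctx context.Context, prompt string) (string, error)
}

// mockProvider stands in for a real adapter in this sketch.
type mockProvider struct{ name string }

func (m mockProvider) Complete(ctx context.Context, prompt string) (string, error) {
	return fmt.Sprintf("[%s] reply to: %s", m.name, prompt), nil
}

// composeReply only depends on the interface, so swapping providers is
// a configuration change, not a change to the call sites.
func composeReply(c Completer, email string) (string, error) {
	return c.Complete(context.Background(), "Draft a professional reply to:\n"+email)
}

func main() {
	var llm Completer = mockProvider{name: "mistral"}
	reply, _ := composeReply(llm, "Can we move the meeting?")
	fmt.Println(reply)
}
```

Which concrete adapter is assigned to the `Completer` variable can then come from a config file or environment variable, which is why no recompile of the business logic is needed.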
I have developed another app, this time for desktop: translation and text fixing. And yes, it works with every LLM API I am aware of. Of course it also works with DeepSeek, even though some people constantly act as if that were somehow special.
(This one runs locally with ollama. qwen2.5:14b performance is OK-ish.)
That sounds like a great tool for streamlining email responses! Using AI to generate both positive and negative professional replies can save a lot of time while ensuring a polished tone. Integrating it with IMAP for seamless inbox and sent folder syncing is a smart approach.
How has the transition from Gemini to Mistral impacted performance or response quality? Also, any plans to support multilingual email composition?
How has the transition from Gemini to Mistral impacted performance or response quality?
Performance does not matter that much, because the calls are made from the server before the user even opens the app. The response proposals are already pre-generated by the time the user checks for them.
I did not check the response quality in detail, but for a real product this needs to be done. I am sure that by working on the prompts, the quality can be made very good with every model. But in general: the bigger and better the model, the better the result can be.
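The pre-generation idea above can be sketched in a few lines of Go (all names here, `store`, `pregenerate`, `drafts`, are hypothetical, and the `generate` callback stands in for the real LLM call): when a mail arrives, the server generates both proposals up front and caches them, so reads are instant.

```go
package main

import (
	"fmt"
	"sync"
)

// drafts holds the two pre-generated proposals for one incoming message.
type drafts struct {
	Positive string
	Negative string
}

// store caches drafts by message ID; sync.Mutex guards concurrent
// access from the mail-fetching goroutine and HTTP handlers.
type store struct {
	mu sync.Mutex
	m  map[string]drafts
}

func newStore() *store { return &store{m: make(map[string]drafts)} }

// pregenerate runs server-side as soon as a mail arrives. The generate
// callback is a stand-in for the actual LLM request.
func (s *store) pregenerate(msgID, body string, generate func(tone, body string) string) {
	d := drafts{
		Positive: generate("positive", body),
		Negative: generate("negative", body),
	}
	s.mu.Lock()
	s.m[msgID] = d
	s.mu.Unlock()
}

// get returns the cached proposals; no LLM latency on the read path.
func (s *store) get(msgID string) (drafts, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	d, ok := s.m[msgID]
	return d, ok
}

func main() {
	s := newStore()
	fake := func(tone, body string) string { return tone + " reply to: " + body }
	s.pregenerate("msg-1", "Can you ship by Friday?", fake)
	if d, ok := s.get("msg-1"); ok {
		fmt.Println(d.Positive)
	}
}
```

This is why raw model latency barely matters here: it is paid once, in the background, per incoming mail.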
Also, any plans to support multilingual email composition?
At the moment there is nothing planned. I build things like this as an advertisement for me and my skills, and to show that Go is actually a great choice for backend services. (It is much better than Python; it is not even close.)
But in general it is easy to add any language that the LLM is actually trained on.
Hence: if you want support for a more unusual language, you may be limited to a smaller set of models to choose from.
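Adding a language would mostly mean threading a target-language parameter into the prompt. A tiny sketch of that idea (the `buildPrompt` helper is hypothetical, and the language is only honored if the chosen model was actually trained on it):

```go
package main

import "fmt"

// buildPrompt threads a tone ("positive"/"negative") and a target
// language into the reply prompt. Whether the output is actually good
// in that language depends entirely on the model's training data.
func buildPrompt(tone, language, email string) string {
	return fmt.Sprintf(
		"Write a professional %s reply in %s to the following email:\n%s",
		tone, language, email)
}

func main() {
	fmt.Println(buildPrompt("positive", "German", "Können wir den Termin verschieben?"))
}
```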