Frequently asked questions
-
When you use BevSage, you get access to our databases, our rubrics, our custom prompts, and our models - all designed specifically for beverage marketing.
By embedding these into our system, we ensure that queries always provide sophisticated answers to your questions. Without this ‘back-end’, enquiries to ChatGPT or other LLMs (Large Language Models) return much less sophisticated and complete results. They are also much more prone to LLM ‘hallucinations’ (making stuff up).
In addition, we are able to choose which models we use and to tweak their back-end settings - we might use ChatGPT-mini to return fast answers to simple questions, or Gemini 2.5 Pro to provide sensible answers when we turn up the creativity dials, as we do for the creative storytelling module.
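For the technically curious, here is a tiny, hypothetical sketch of what that kind of model routing can look like in Python. The model names, task labels, and temperature values are illustrative only - they are not a description of BevSage’s actual back-end.

# Hypothetical illustration only - not BevSage's real routing logic.
def pick_model(task: str) -> dict:
    """Choose a model and a 'creativity dial' (temperature) setting for a task."""
    if task == "quick_fact":
        # Simple questions: a small, fast model with the creativity dial turned down.
        return {"model": "small-fast-model", "temperature": 0.2}
    if task == "creative_storytelling":
        # Storytelling: a larger model with the creativity dial turned up.
        return {"model": "large-creative-model", "temperature": 0.9}
    # Everything else gets a balanced default.
    return {"model": "general-model", "temperature": 0.5}

print(pick_model("creative_storytelling"))
# {'model': 'large-creative-model', 'temperature': 0.9}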
See the story on why we are different in the Insights section below.
-
Yes – you use it just like ChatGPT, only supercharged. You can use the context windows to undertake any search you want, but it is constrained by the tools you use. We have chosen a particular LLM model and knowledge bases for you to use.
On the Growth and Globalist tiers we have embedded a ChatGPT model as a Tool, so you can go to ChatGPT and it will behave exactly as you are already used to. And you can search for anything.
Additionally, on the Growth and Globalist tiers, we have built private models. All of the Tools on Starter use public models and tools.
-
Plans are the only way you can access our tools. They are not really subscriptions; they are a 12-month access program to our tools.
Services are consulting projects provided in partnership with Hydra Consulting that rely on a range of AI tools, including BevSage, but also on Hydra’s proprietary models.
Hydra Consulting’s tools form the basis of both the BevSage Tools and the Services.
-
It is powerful to be able to use your own data to support enquiries and research in BevSage. Most of our models allow you to add your data to your enquiries. We also have folders where you can build deep and ongoing enquiries based on your knowledge, our knowledge, and the knowledge generated as you add enquiries to the folder. It is an ever-expanding knowledge base that remembers everything you have ever done in the folder and can access it at any time in the future.
This connects your existing information into all of the work you will do on BevSage.
The types of data you can add include:
Your existing Tech and Fact Sheets.
Your research.
Your media kit.
Any market research that you have access to.
Our tools are all text-based. At the moment we can’t read graphs into the knowledge bases, only text, so it is important that the data you load into the knowledge base is text-based - that is, highly descriptive.
Tools to read graphs and figures are emerging and we will add these as soon as they are reliable.
You add your knowledge by selecting workspaces on the left menu bar, then choosing knowledge in the top menu. Just create a knowledge base (see the user manual), give it a name, and it is yours to use forever.
By default, all knowledge that you add is private to you. No-one else can access your knowledge unless you share it into a public model. All of our models should be assumed public unless they are specifically noted as private. Only use private company data in private tools. All tools on the Starter tier are public - data is loaded onto external tools.
-
We take your data privacy and security seriously. Our services are hosted in Australia, ensuring compliance with Australian Privacy Principles (APPs) under the Privacy Act 1988 (Cth). Our data protection practices are designed to meet or exceed the requirements of both Australian Government security standards and the EU General Data Protection Regulation (GDPR).
All data is encrypted in transit and at rest, and stored on highly available, secure infrastructure to protect against loss, misuse, and unauthorised access.
Use of AI Models
We use both public and private AI models to enhance our services:
Public AI Models: When you choose to use features powered by public AI models, your data may be processed outside our controlled environment. While your data is not used for AI training, it may be logged by the AI vendor and used internally by them for system optimisation or monitoring purposes.
Private AI Models: Data processed through our private AI models remains entirely within our secure environment. It is not shared externally or reused for any purpose other than delivering the requested service.
Web Searches and Scraping
When performing web searches or web scraping, your data is not exposed or transmitted externally. These operations are isolated from your stored or submitted information.
Data Sharing
Your data will never be shared with third parties unless you have explicitly authorised us to do so. The only exception to this is when you use public AI model features, as described above.
-
Our back-end, TonsleyAI, runs our models through a process that delivers sophisticated answers to our pre-set prompts or your queries, adapted specifically as an enquiry engine for BevSage.
Within that, you can save and use your own data when you run both public and private tools and models.
Your data is stored in your private knowledge repository and is only applied to any analysis if you add it.
It stays within our environment unless you attach it to a search; we never send the data outside our secure TonsleyAI enclave ourselves. You should attach private data only to private models, when they become available.
Your knowledge is safely stored. You have to make sure you only use private data in the private models.
Compare this with ChatGPT or other LLMs, where your data is always drawn into their systems, can be used for model training, and is always ‘seen’ by these global models.
-
Yes.
If you are using BevSage for export market development, the percentage of export market use can be claimed under the EMDG (Export Market Development Grants) scheme.
You have to make sure that you pay for the service during your eligible claim period.
BevSage is not strictly a subscription. We are actually selling 12-month access to Hydra Consulting’s tools, databases, rubrics, prompts, and models for users’ market research, planning, and story/UVP development. There is no automatic renewal.
General software or platform subscriptions that are not clearly export promotion services are ineligible. So make sure you are clear with Austrade and your advisors that BevSage is access to market development tools and services that provide deep access to market knowledge and market entry planning.
Tech talk
-
BevSage uses two types of Tools and Models:
Public – available on every tier. Each Tool and Model is marked as public or private in the instructions above the prompt.
Private – these are only available on the Growth and Globalist tiers. All the tools on the Starter tier use public models.
There are enormous benefits to using public models and tools: we can optimise queries, undertake data scrapes to enhance our knowledge, find relevant competitor pricing, return suggested trade partners, and provide deeper tailored recommendations.
The benefits of private tools and models are that you can use our knowledge to enhance your data.
BevSage’s public tools and models send data out to public models outside our environment. What does this mean?
It means the tools are optimised using models like ChatGPT and Gemini outside of our environment – we send data to them.
We get the benefit of public data search integration from the web in addition to our own knowledge bases for live market monitoring and real-time insights built on their training data.
We do not permit them to use our data for model training, model improvement, or troubleshooting.
We do not allow data retention.
We use end-to-end encryption.
We do not provide them with any of your information, other than what you put in prompts or add to queries as your knowledge.
We host in GDPR compliant data facilities.
When you are using a public model:
Do not upload any data from your own knowledge that is company confidential.
Upload only data that is public domain or non-confidential.
The types of knowledge you can upload are: Tech Sheets and product data sheets; Marketing materials; Media packs; Blogs and stories about you.
The types of knowledge you should not use in public models are: Purchased market data; Market data from behind industry body paywalls; Data purchased from external sources; Confidential business data.
When you load your own knowledge, we recommend that you create two areas of knowledge storage – one private and one public so you never accidentally attach private knowledge to a public tool.
Do not load financial data into BevSage. It is a marketing tool, not a financial one.
-
A RAG, or Retrieval-Augmented Generation, is a way of making large language models (LLMs) like ChatGPT much smarter and more accurate by letting them look things up instead of guessing.
Imagine you’re having a conversation with someone who’s brilliant at writing and explaining ideas (the LLM), but whose memory only goes up to a certain point. A RAG system gives that person access to a private library of fresh, trustworthy information. When you ask a question, the system does two things:
Retrieves relevant documents, reports, or snippets from that library or database.
Feeds those snippets into the model before it starts writing an answer.
The “retrieval” part acts like a custom search engine. The “generation” part is what the model does naturally, writing and reasoning with words. Together, they produce answers that are grounded in real data instead of relying on whatever the model remembers from its training.
So, if you asked a plain LLM “What are the current export rules for Australian wine?”, it might rely on outdated knowledge. A RAG-enabled LLM could first fetch the latest EMDG guidelines from your company’s files or Austrade’s site, then use that retrieved text to answer correctly and cite the source.
In short:
LLM alone = a clever writer with a limited memory.
RAG system = that same writer with instant access to an up-to-date filing cabinet.
RAGs make LLMs far more reliable for business intelligence, compliance work, and any domain where accuracy and freshness matter.
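For readers who want to see the retrieve-then-generate idea in code, here is a minimal, illustrative sketch in Python. The documents, scoring, and prompt format are hypothetical stand-ins, not BevSage’s implementation; a real RAG system would use semantic search rather than simple word overlap.

# Minimal, illustrative RAG sketch - the documents and helpers are hypothetical.
DOCUMENTS = [
    "EMDG guidelines: eligible export promotion expenses include overseas marketing ...",
    "Tech sheet: 2023 Shiraz, 14.2% ABV, suggested retail price AUD 35 ...",
]

def retrieve(question, docs, top_k=1):
    """Find the documents that best match the question (here, by simple word overlap)."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_prompt(question, docs):
    """Feed the retrieved snippets to the model before it writes its answer."""
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

question = "What do the EMDG guidelines say about export promotion expenses?"
prompt = build_prompt(question, retrieve(question, DOCUMENTS))
print(prompt)  # This grounded prompt is what gets sent to the LLM for generation.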
-
Large language models hallucinate when they confidently make things up.
They do this because they don’t actually know facts. Instead, they predict words based on patterns in the data they were trained on. When a model faces a question it hasn’t seen before, or when its memory is fuzzy, it fills in the gaps by generating something that sounds right rather than something that is right.
Think of it like a very fluent person bluffing their way through a quiz: the language flows perfectly, but the substance might be wrong. The model isn’t lying; it simply has no built-in sense of honesty. Truth and fiction are just probabilities to it.
Here’s how to picture that behaviour. LLMs are like puppies, trained to please, not to be truthful. They’ve learned from endless examples of human conversation, so they know what sounds friendly, helpful, or clever. Their “creativity dial” (called temperature) decides how playful or cautious they are: turn it up and they invent, turn it down and they stick closer to the facts. That same creative spark that lets them write poetry or tell jokes also tempts them to imagine details that don’t exist. They wag their metaphorical tails, eager to make you happy, whether or not they’re right.
But they also behave like teenagers doing homework with limited time. Every model has a token limit, its word budget, and when that runs out, it starts rushing. Answers can get patchy or half-baked as it tries to finish before the clock runs out. Keeping prompts short and clear gives it more room to think calmly instead of scribbling through the final paragraph.
And like over-curious pups, LLMs love chasing rabbits down burrows. If your prompt is vague, they’ll wander after side topics or shiny ideas that sound interesting but miss your point. A precise, well-structured question keeps them on the leash and focused on the trail you actually care about.
Finally, an LLM is like a road without guardrails - it can take you somewhere remarkable, but it is also prone to veering off course. That’s why systems like Retrieval-Augmented Generation (RAG) exist. RAGs add those guardrails by feeding the model verified information before it starts answering, keeping its creativity grounded in real data.
LLMs don’t understand honesty - they understand likelihood. That makes them powerful writing companions but unreliable witnesses. Treat them not as oracles but as bright, enthusiastic assistants who work best with clear instructions, good data, and the occasional gentle tug on the leash.
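If you ever experiment with an LLM directly, the ‘creativity dial’ and ‘word budget’ described above are ordinary request parameters. Here is a hedged sketch, assuming the OpenAI Python client (v1.x) with an API key already set in the environment; the model name and values are illustrative only.

# Sketch only - assumes the OpenAI Python client (v1.x); model name and values are illustrative.
from openai import OpenAI

client = OpenAI()  # Reads the OPENAI_API_KEY environment variable.

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise the key steps for exporting Australian cider."}],
    temperature=0.2,  # The 'creativity dial': low values stick to the facts, high values invent more freely.
    max_tokens=400,   # The 'word budget': how much room the model has before it must stop.
)
print(response.choices[0].message.content)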
Insights
Why we built BevSage
We love using market data and research, but it is rarely specific to an SME. Most of it focuses on delivering products that the largest number of people can use. And it doesn’t help with the big issues of market entry: pricing, finding trade partners, identifying consumers, telling your story.
We wanted BevSage to produce results that are relevant for you:
Your markets – helping you to enter or grow them.
Your channels or trade partners.
Your target consumers.
Your target consumption occasions.
Your price-points.
Your UVP or brand story.
We wanted our market research to tell you exactly how you were positioned in a market, who you could target, how you could pitch yourself to that particular trade partner, how you could create exciting activations for your partners or consumers. You. Always you. Not producers from Australia, from your region, with your product category. You.
That is why we maintain such huge databases of story models, personas, and so much information about megatrends and microtrends. It helps you find your niche.
We give you actual direction and signposts. Not just information.
Hydra Consulting and Moots Technology built BevSage on Moots’ TonsleyAI platform because we could. All of our experience in supporting beverage companies to grow has been poured into this platform. We hope you love it.