Machine Amplified Human Intelligence

The speech recognition demo only works in Chrome.


At the heart of MAHI.

At MAHI, we don't just build artificial intelligence. We use machine learning, backed by raw computing power, to enhance human knowledge.


We studied how conversations flow, topics change, and ideas develop, and used those insights to design an algorithm that extracts themes from text.

The task was immense, and the solution more so. We taught a computer to take strings of characters, whether they come from an article or a conversation, and determine the topic or theme.

We then took it a step further and built the MAHI engine to intelligently predict other topics you may want to know about, enabling information discovery.


Before writing a single line of code, we analyzed both written and conversational English to understand how best to interpret text.

The MAHI algorithm uses statistics, calculus, probability, and set theory to isolate and relate the most relevant topics in a passage. And the more it analyzes, the better it gets.

Statistics lets the algorithm compare the current text analysis to previous ones and choose the correct interpretation and context.

Calculus helps the MAHI engine track the rate of change in topics and themes at a meta level, to understand the direction of a line of thought.

Probability is used to predict related topics and background information most relevant to the analyzed text.

Set theory helps us determine a baseline theme for a piece of analyzed text to establish a context for interpretation.
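As a toy illustration of two of these ideas, here is a minimal sketch (not the MAHI algorithm itself) that uses word-frequency statistics to rank candidate topics and set intersection to establish a baseline theme. The stopword list and tokenizer are simplified assumptions for the example.

```python
# Toy sketch of two ideas above: frequency statistics to rank candidate
# topics, and set intersection to find a baseline theme across sentences.
# This is illustrative only, not the MAHI implementation.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it"}

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

def top_topics(text, k=3):
    """Statistics: rank the most frequent non-stopword terms as topics."""
    words = [w for w in tokenize(text) if w and w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(k)]

def baseline_theme(sentences):
    """Set theory: terms shared by every sentence form a baseline theme."""
    sets = [set(tokenize(s)) - STOPWORDS for s in sentences]
    return set.intersection(*sets) if sets else set()
```

In a real system the frequency counts would be weighted against prior analyses, which is where the statistical comparison described above comes in.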


Once MAHI figures out what the topic is (a restaurant, movie, concept, object, or company), it accesses the right information provider to bring you key facts from the most relevant sources, saving you the time of finding them yourself.

With the MAHI API, which is currently in development, you will be able to integrate the text-to-knowledge engine into your own applications and access information through your custom information providers.
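Since the API is still underway, its actual interface is not yet published. Purely as a hypothetical sketch, a client call might look something like the following; the endpoint URL, payload shape, and response fields are all invented for illustration.

```python
# Hypothetical client sketch. The URL, payload, and response fields are
# assumptions for illustration, not the published MAHI API.
import json
from urllib import request

def build_request(text, api_key):
    """Build a POST request sending text to a (placeholder) topics endpoint."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return request.Request(
        "https://api.example.com/v1/topics",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

def extract_topics(text, api_key):
    """Send the text and return the parsed JSON response."""
    with request.urlopen(build_request(text, api_key)) as resp:
        return json.load(resp)  # e.g. {"topics": [...], "related": [...]}
```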


We refined the production text-to-knowledge algorithm to run in Θ(n) time, up to 15 times faster than the prototype.

This was made possible by redesigning key data structures, introducing parallel processing, and profiling the run time of each option to find the best balance between efficiency and accuracy. The improvement also enabled us to run the application in real time, so users receive results right away.
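The profiling step can be sketched with the standard library: time each candidate option and keep the faster one. Here is a minimal example comparing two data-structure choices for membership tests, a common source of the kind of speedup described above.

```python
# Minimal sketch of profiling two data-structure options: membership
# lookup in a list (O(n) scan) versus a set (O(1) average hash lookup).
import timeit

words_list = [f"word{i}" for i in range(10_000)]
words_set = set(words_list)

# Time 1000 lookups of the worst-case element for the list scan.
t_list = timeit.timeit(lambda: "word9999" in words_list, number=1000)
t_set = timeit.timeit(lambda: "word9999" in words_set, number=1000)
# The set wins decisively; profiling each option makes such choices concrete.
```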


Under the hood.

Parallel Processing

The MAHI engine is designed to process the same layers of the algorithm topology in parallel to increase efficiency.
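A minimal sketch of this pattern, assuming the units of work within a layer are independent: fan the layer's segments out across a pool of workers and collect the results in order. `analyze_segment` here is a placeholder, not the MAHI implementation, and a `ProcessPoolExecutor` would be the choice for CPU-bound layers since threads share the GIL.

```python
# Sketch of processing one layer's independent segments in parallel.
# analyze_segment is a stand-in for a layer's real work; for CPU-bound
# work a ProcessPoolExecutor would sidestep the GIL.
from concurrent.futures import ThreadPoolExecutor

def analyze_segment(segment):
    """Placeholder for one unit of a layer's work on a text segment."""
    return segment.lower().split()

def run_layer(segments):
    """Run every segment of a layer concurrently, preserving order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(analyze_segment, segments))
```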


MAHI has built-in error handling and self-correction, and it does not rely on punctuation when analyzing text.

Over 90% of the MAHI code base is written in Python, which powers the text-to-knowledge algorithms. Our modular architecture lets us refine individual components of the MAHI engine as we continue to develop the platform.

Topic and relation data are stored in MongoDB, which lets us develop faster and write more complex queries for dynamic access to previously stored information.
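To make this concrete, here is an illustrative sketch of how topic and relation documents might be shaped; the field names are assumptions, not MAHI's actual schema, and the query is mirrored by a pure-Python stand-in so the logic is visible without a database.

```python
# Illustrative document shapes; field names are assumptions, not
# MAHI's schema. A topic document and a relation document:
topic_doc = {"name": "espresso", "theme": "coffee", "seen_count": 4}
relation_doc = {"from": "espresso", "to": "latte", "weight": 0.8}

# The MongoDB shell query for "topics under a theme, most seen first"
# would be: db.topics.find({"theme": "coffee"}).sort("seen_count", -1)
def find_by_theme(topics, theme):
    """Pure-Python stand-in for the find/sort query above."""
    return sorted((t for t in topics if t["theme"] == theme),
                  key=lambda t: -t["seen_count"])
```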

PHP is used to interface with the server and access the database: it loads previously stored topics in a conversation and saves new ones.

The final output web page for the MAHI speech-to-knowledge demo (below) is built with HTML5, CSS3, and JavaScript, which manipulates elements and displays the loaded content.


The power of text to knowledge.

We used the MAHI engine to create a tool that is straight out of science fiction.

Using audio from your microphone, MAHI applies speech recognition to transcribe a conversation, lecture, or presentation into text, which the engine then analyzes. Once MAHI figures out what is being discussed, it displays the subjects covered along with related topics that may interest you.

You can explore the topics that MAHI found in the conversation to really get a feel for the power and robustness of the system. We are working on opening up the MAHI API to allow you to integrate MAHI's power into your own stack so that you can create futuristic applications.

We integrated our tool with other APIs so that once MAHI extracts the topics, we can pull together relevant data from the right sources and bring the information straight to you.



Together everyone achieves more.


Brown University '17


Boston University '17