Our Technology

Better Decisions -- 100 Times Faster

Computers have a hard time understanding language

Typically, software only looks at which words are used in a given piece of text.

Unfortunately, this information is incomplete: "Apple is better than Samsung" is very different from "Samsung is better than Apple", even though they use the same words.

Making correct decisions requires that we take into account the structure of documents and sentences.
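To make the problem concrete, here is a minimal Python sketch (illustrative only, not part of our product) showing that a bag-of-words view cannot tell the two sentences apart:

    from collections import Counter

    # Two sentences with opposite meanings built from identical words.
    a = "Apple is better than Samsung"
    b = "Samsung is better than Apple"

    # A bag-of-words model keeps only word counts and discards order.
    bow_a = Counter(a.lower().split())
    bow_b = Counter(b.lower().split())

    print(bow_a == bow_b)  # True: to this model, the sentences are identical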

A Different Approach

Our software represents sentences in a very different way: as a single, fixed-length, distributed vector of numbers constructed by our patented MBAT system. This vector includes information on words, sentence structure, and document structure. Similar sentences and documents result in similar vectors.
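The full construction is described in the publications below; the toy Python sketch that follows conveys only the flavor of positional binding with a random matrix. Everything in it (the dimensionality, the scaling, the encode function) is an illustrative assumption, not the patented MBAT implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 1024                      # illustrative dimensionality
    lexicon = {}                  # word -> random distributed vector

    def word_vec(w):
        if w not in lexicon:
            lexicon[w] = rng.standard_normal(d) / np.sqrt(d)
        return lexicon[w]

    # A fixed random orthogonal matrix "stamps" each word with its position.
    M, _ = np.linalg.qr(rng.standard_normal((d, d)))

    def encode(sentence):
        # v = M*w1 + M^2*w2 + ... , computed right to left in Horner form.
        v = np.zeros(d)
        for w in reversed(sentence.lower().split()):
            v = M @ (word_vec(w) + v)
        return v

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    a = encode("Apple is better than Samsung")
    b = encode("Samsung is better than Apple")
    print(cosine(a, encode("Apple is better than Samsung")))  # ~1.0: same text
    print(cosine(a, b))  # about 0.6: same words, different structure

Because each word vector is bound to its position before being summed, reordering the words moves the vector, while sentences that share words in shared positions stay close together.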

The biggest advantage of this representation is that it is ideal for using machine learning to create predictive models based on the text.
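As a hypothetical usage example (continuing the sketch above, with invented labels), the fixed-length vectors plug directly into any off-the-shelf learner:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each text becomes one fixed-length vector via encode() from the
    # sketch above, so an ordinary classifier can train on it directly.
    texts = ["Apple is better than Samsung", "Samsung is better than Apple"]
    labels = [1, 0]  # e.g. 1 = pro-Apple sentiment (toy labels)

    X = np.stack([encode(t) for t in texts])
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(encode("Apple is better than Samsung")[None, :]))  # [1]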

Why This Helps

MBAT vectors capture, compare, and use information about word, sentence, and document structure. Representing text this way helps computers learn models from examples 100 times faster.

Selected Publications On Text & Machine Learning

  • Gallant, S. I. & Culliton, P. (2016). Positional Binding with Distributed Representations. ICIVC 2016, Portsmouth, England, August 3-5, 2016.
  • Gallant, S. I. & Okaywe, T. W. (2013). Representing Objects, Relations, and Sequences. Neural Computation, Vol. 25, pp. 2038-2078.
  • Caid, W. R., Dumais, S. T. & Gallant, S. I. (1995). Learned Vector-Space Models for Document Retrieval. Information Processing and Management, Vol. 31, No. 3, pp. 419-429.
  • Gallant, S. I. (1993). Neural Network Learning and Expert Systems. MIT Press, 380 pages (ISBN 0-262-07145-2).
  • Gallant, S. I. (2000). Context Vectors: A Step Toward a "Grand Unified Representation". In Wermter, S. & Sun, R. (Eds.), Hybrid Neural Systems. Springer Berlin Heidelberg, Lecture Notes in Computer Science, Vol. 1778, pp. 204-210.
  • Gallant, S. I., Caid, W. R., et al. (1994). Feedback and Mixing Experiments with MatchPlus. In Harman, D. (Ed.), The Second Text REtrieval Conference (TREC-2), NIST Special Publication 500-215, pp. 101-104.
  • Gallant, S. I., Caid, W. R., Carleton, J., Hecht-Nielsen, R., Pu Qing, K. & Sudbeck, D. (1992). HNC's MatchPlus System. In Harman, D. (Ed.), The First Text REtrieval Conference (TREC-1), NIST Special Publication 500-207, Washington, DC, Nov. 4-6, 1992, pp. 107-111.
  • Gallant, S. I. (1991). A Practical Approach for Representing Context and for Performing Word Sense Disambiguation Using Neural Networks. Neural Computation, Vol. 3, No. 3, pp. 293-309. Conference version: IJCNN-91, Seattle, Washington, July 8-12, 1991.
  • Gallant, S. I. & Smith, D. (1987). Random Cells: An Idea Whose Time Has Come and Gone... and Come Again? In Proceedings of the IEEE International Conference on Neural Networks, Vol. 2, pp. 671-678. Piscataway, NJ: IEEE.
  • Gallant, S. I. & King, D. J. (1988). Experiments with Sequential Associative Memories. Cognitive Science Society Annual Conference, Montreal, August 17-19, 1988, pp. 40-47.
  • Gallant, S. I. (1994). Words and Weights: What the Network's Parameters Tell the Network's Programmers. International Symposium on Integrating Knowledge and Neural Heuristics, Pensacola Beach, FL, May 9-10, 1994, pp. 21-24.
  • Gallant, S. I. (1986). Three Constructive Algorithms for Network Learning. Eighth Annual Conference of the Cognitive Science Society, Amherst, MA, Aug. 15-17, 1986, pp. 652-660.