Deep learning is a form of machine learning that uses artificial neural networks to make decisions based on presented data and information. Deep learning neural networks make decisions in a way loosely inspired by the human brain. The difference between deep learning and traditional machine learning is that deep learning algorithms use deep neural networks: many hidden layers of neurons between the data input and the solution output. These decision-making algorithms require large amounts of data and information to make choices. With the rise of accessible data, deep learning algorithms are becoming more prevalent. Deep learning applications include image recognition, natural language processing, and voice recognition.
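The "hidden layers" idea can be made concrete with a toy sketch. The plain-Python example below (with hand-picked illustrative weights, not a trained model) passes two binary inputs through one hidden layer and one output unit; with these particular weights the network computes XNOR, a function a single layer without hidden units cannot represent.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    """Propagate inputs through a stack of fully connected layers.
    Each layer is a list of (weights, bias) pairs, one per unit."""
    acts = inputs
    for layer in layers:
        acts = [sigmoid(sum(w * a for w, a in zip(ws, acts)) + b)
                for ws, b in layer]
    return acts

# Hand-picked weights for illustration only (not learned from data).
hidden_layer = [([20.0, 20.0], -30.0),   # unit ~ AND(x1, x2)
                ([-20.0, -20.0], 10.0)]  # unit ~ NOR(x1, x2)
output_layer = [([20.0, 20.0], -10.0)]   # unit ~ OR of the hidden units
layers = [hidden_layer, output_layer]

# The composed network computes XNOR: 1 when the inputs agree.
for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, round(forward(x, layers)[0]))
```

In practice the weights are not hand-picked but learned from data, which is why deep learning needs the large datasets mentioned above.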
OpenCV is a tool that has C++, C, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS, and Android. Designed for computational efficiency and with a strong focus on real-time applications, the library is written in optimized C/C++, can take advantage of multi-core processing, and can use the hardware acceleration of the underlying heterogeneous compute platform.
Inbenta, a global leader in artificial intelligence, utilizes patented natural language processing technology to provide a highly accurate search solution for customer support, e-commerce and chatbots. Inbenta's semantic search engine understands & delivers results based on the meaning behind customers’ search queries, not the individual keywords, leading to improved customer satisfaction, lower support costs and stronger ROI. The result: industry-leading 90%+ self-service rates.
And this is where Google's Deep Dream ideas originate. In simple terms, you give an AI program a set of images, tell it what those images contain (what objects: dogs, cats, mountains, bicycles, ...), then give it a random image and ask it what objects it can find in that image.
IBM SPSS Text Analytics for Surveys software lets you transform unstructured survey text into quantitative data and gain insight using sentiment analysis. The solution uses natural language processing (NLP) technologies specifically designed for survey text.
Microsoft Computer Vision API is a cloud-based API tool that provides developers with access to advanced algorithms for processing images and returning information. By uploading an image or specifying an image URL, it analyzes visual content in different ways based on inputs and user choices.
SimpleCV is an open source framework for building computer vision applications. Users can access several high-powered computer vision libraries, such as OpenCV, without having to learn about bit depths, file formats, color spaces, buffer management, eigenvalues, or matrix versus bitmap storage.
DeepPy is an MIT-licensed deep learning framework that tries to add a touch of zen to deep learning: it allows for Pythonic programming based on NumPy's ndarray, has a small and easily extensible codebase, runs on CPUs or Nvidia GPUs, and implements the following network architectures: feedforward networks, convnets, siamese networks, and autoencoders.
NVIDIA Deep Learning GPU Training System (DIGITS) brings deep learning to data science and research, letting users quickly design deep neural networks (DNNs) for image classification and object detection tasks using real-time network behavior visualization.
IBM Watson Text to Speech is a service that provides a REST API to synthesize speech audio from an input of plain text. Multiple voices, both male and female, are available across Brazilian Portuguese, English, French, German, Italian, Japanese, and Spanish. Synthesized in real time, the audio is streamed back to the client with minimal delay, and the service enables developers to control the pronunciation of specific words.
IBM Watson Tone Analyzer is a service that uses linguistic analysis to detect three types of tones from text: emotion, social tendencies, and language style. Emotions identified include anger, fear, joy, sadness, and disgust. Identified social tendencies draw on the Big Five personality traits used by some psychologists, including openness, conscientiousness, extroversion, agreeableness, and emotional range. Identified language styles include confident, analytical, and tentative.
A software product that writes unique content from data: Natural Language Generation software in the cloud, providing an API from data to written content in any language. 1. Powerful text generation: The AX Semantics Natural Language Generation Platform creates meaningful written content. You can use it to create any kind of text: product descriptions, news articles, business reports, documentation, and much more. This content reads like human-written content because it interprets your editorial input. 2. Any content, any language: We currently provide 24 languages, including but not limited to English (US and GB), German, Arabic, and Chinese. All other languages are available via our early access program. All content is generated in the native target language but can be based on multilingual data, enabling you, for example, to generate French content from Spanish data. 3. A tool for editors, no developers needed: Our tools give you access to the complete Natural Language Generation toolchain, including all possible grammatical functions, data extraction and detection means, and delivery methods. 100% SaaS: everything is available from your desk via your browser. No programming skills, no SDK needed. 4. Fully automated content: Since everything is provided via API, you can completely automate your content production. Upload data and get the generated text back via easy REST API calls, or use instant webhooks. We provide real-time content production and scale up to millions of texts per day!
Quill Engage analyzes your Google Analytics data and delivers narrative reports in plain-English on your website performance. Reports are delivered directly to your inbox weekly and monthly and provide in-depth analysis on your site's KPIs, including sessions, pageviews, referral traffic, goals & conversions, events, ecommerce and AdWords.
NLTK is a platform for building Python programs to work with human language data that provides interfaces to corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries, and an active discussion forum.
TextBlob is a Python (2 and 3) library for processing textual data that provides a simple API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more.
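As a rough illustration of what a polarity-style sentiment score means, here is a toy lexicon-based scorer in plain Python. This is not TextBlob's implementation, and the word list is a hypothetical mini-lexicon; it only shows the general idea of averaging per-word polarities.

```python
# Hypothetical mini-lexicon mapping words to polarity scores in [-1, 1].
POLARITY = {"great": 1.0, "good": 0.5, "bad": -0.5, "terrible": -1.0}

def polarity(text):
    """Average the polarity of known words; 0.0 if none are known."""
    words = text.lower().split()
    scores = [POLARITY[w] for w in words if w in POLARITY]
    return sum(scores) / len(scores) if scores else 0.0

print(polarity("the movie was great"))   # 1.0
print(polarity("a bad terrible plot"))   # -0.75
```

Real libraries refine this with tokenization, negation handling, intensifiers, and much larger lexicons or trained classifiers.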
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. Amazon Comprehend identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; and automatically organizes a collection of text files by topic.
Azure Face API uses state-of-the-art cloud-based face algorithms to detect and recognize human faces in images. Its capabilities include features like face detection, face verification, and face grouping to organize faces into groups based on their visual similarity.
Azure Translator Speech API, part of the Microsoft Cognitive Services API collection, is a cloud-based machine translation service. The API enables businesses to add end-to-end, real-time, speech translations to their applications or services.
cuda-convnet2 is a fast C++/CUDA implementation of convolutional (or, more generally, feed-forward) neural networks that can model arbitrary layer connectivity and network depth; any directed acyclic graph of layers will do. It requires a Fermi-generation GPU (GTX 4xx, GTX 5xx, or Tesla equivalent).
Deeplearning4j is the first commercial-grade, open-source, distributed deep learning library written for Java and Scala. Integrated with Hadoop and Spark and designed for business environments on distributed GPUs and CPUs, it aims to be cutting-edge plug and play, with more convention than configuration, which allows fast prototyping for non-researchers.
Frog is an integration of memory-based natural language processing (NLP) modules that tokenizes, tags, lemmatizes, and morphologically segments word tokens in Dutch text files. It will assign a dependency graph to each sentence, identify the base phrase chunks in the sentence, and attempt to find and label all named entities.
Hebel is a library for deep learning with neural networks in Python, using GPU acceleration with CUDA through PyCUDA. It implements the most important types of neural network models and offers a variety of activation functions and training methods, such as momentum, Nesterov momentum, dropout, and early stopping.
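Momentum, one of the training methods listed, can be sketched in a few lines: instead of following the current gradient alone, the update accumulates a decaying "velocity" from past gradients, which damps oscillation and speeds convergence. A minimal plain-Python sketch with illustrative hyperparameters (this is not Hebel's API):

```python
def minimize_with_momentum(grad, x, lr=0.1, mu=0.9, steps=200):
    """Gradient descent with classical momentum on a 1-D objective."""
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)  # velocity: decayed sum of past gradients
        x = x + v                  # step along the accumulated direction
    return x

# Minimize f(x) = x**2 (gradient 2x), starting far from the optimum at 0.
x_min = minimize_with_momentum(lambda x: 2.0 * x, x=5.0)
print(abs(x_min) < 1e-2)  # True: converged close to the minimum
```

Nesterov momentum differs only in evaluating the gradient at the "looked-ahead" point `x + mu * v` rather than at `x`.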
IBM Watson Document Conversion is a service that provides an Application Programming Interface (API) enabling developers to transform a document into a new format. The input is a single PDF, Word, or HTML document, and the output is an HTML document, a text document, or answer units that can be used with other Watson services.
IBM Watson Natural Language Classifier is a service that enables developers without a background in machine learning or statistical algorithms to create natural language interfaces for their applications. It interprets the intent behind text and returns a corresponding classification with associated confidence levels; the return value can then be used to trigger a corresponding action, such as redirecting the request or answering a question.
LambdaNet is an artificial neural network library written in Haskell that abstracts network creation, training, and use as higher-order functions. It provides a framework in which users can quickly iterate through network designs by using different functional components, and experiment by writing small functional components to extend the library.
Microsoft Language Understanding Intelligent Service (LUIS) is a service that enables users to quickly deploy an HTTP endpoint that takes the sentences sent to it and interprets them in terms of the intention they convey and the key entities that are present. It has a web interface for custom-designing a set of intentions and entities relevant to an application, and it guides users through the process of building a language understanding system.
Microsoft Linguistic Analysis APIs is a tool that provides access to natural language processing (NLP) methods that identify the structure of text. It provides three types of analysis: sentence separation and tokenization, part-of-speech tagging, and constituency parsing.
Microsoft Video API is a cloud-based API that provides advanced algorithms for tracking faces, detecting motion, stabilizing video, and creating thumbnails from video. It allows users to build more personalized and intelligent apps by understanding and automatically transforming video content.
Microsoft Web Language Model API is a REST-based cloud service that provides tools for natural language processing. Using this API, a user's application can leverage the power of big data through language models trained on web-scale corpora collected by Bing in the EN-US market.
MLPNeuralNet is a fast multilayer perceptron neural network library for iOS and Mac OS X that predicts new examples through trained neural networks. It is built on top of Apple's Accelerate Framework, using vectorized operations and hardware acceleration (if available).
Mocha is a deep learning framework for Julia, inspired by the C++ framework Caffe, that offers efficient implementations of general stochastic gradient solvers and common layers. It can be used to train deep or shallow (convolutional) neural networks, with optional unsupervised pre-training via (stacked) auto-encoders.
Natural language Understanding Toolkit (nut) is an implementation of Cross-Language Structural Correspondence Learning (CLSCL).
Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text that supports the common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. These tasks are usually required to build more advanced text processing services. The library includes maximum entropy and perceptron based machine learning.
An Advanced NLG platform, Quill learns and writes like a human—understanding user intent and performing the relevant data analysis to deliver Intelligent Narratives that empower enterprises to make better decisions, focus talent on higher-value opportunities, create new or differentiate existing products, and realize untapped potential.
Part-Of-Speech Tagger (POS Tagger) is a piece of software that reads text in some language and assigns parts of speech to each word (and other token), such as noun, verb, adjective, etc., although generally computational applications use more fine-grained POS tags like 'noun-plural'.
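A toy rule-based sketch in plain Python makes the task concrete: look each token up in a lexicon, and fall back to suffix heuristics for unknown words. Real taggers use trained statistical models rather than rules like these, and the mini-lexicon below is hypothetical.

```python
# Hypothetical mini-lexicon of word -> Penn Treebank POS tag.
LEXICON = {"the": "DT", "a": "DT", "dog": "NN", "runs": "VBZ"}

def tag(tokens):
    """Tag each token via lexicon lookup, then crude suffix heuristics."""
    tags = []
    for tok in tokens:
        if tok.lower() in LEXICON:
            tags.append(LEXICON[tok.lower()])
        elif tok.endswith("ing"):
            tags.append("VBG")   # suffix heuristic: gerund/present participle
        elif tok.endswith("s"):
            tags.append("NNS")   # suffix heuristic: plural noun
        else:
            tags.append("NN")    # default: singular noun
    return list(zip(tokens, tags))

print(tag(["the", "dog", "runs"]))
# [('the', 'DT'), ('dog', 'NN'), ('runs', 'VBZ')]
```

Statistical taggers replace the heuristics with a model that also uses context (neighboring words and tags), which is how they disambiguate words like "runs" (verb vs. plural noun).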
TokensRegex is a generic framework for defining patterns over text (sequences of tokens) and mapping them to semantic objects represented as Java objects. It emphasizes describing text as a sequence of tokens (words, punctuation marks, etc.), which may have additional attributes, and writing patterns over those tokens, rather than working at the character level as standard regular expression packages do.
Stanford Topic Modeling Toolbox (TMT) brings topic modeling tools to social scientists and others who wish to perform analysis on datasets that have a substantial textual component. It can import and manipulate text from cells in Excel and other spreadsheets, train topic models (LDA, Labeled LDA, and PLDA) to create summaries of the text, select parameters (such as the number of topics) via a data-driven process, and generate rich Excel-compatible outputs for tracking word usage across topics, time, and other groupings of data.
Stanford Word Segmenter currently supports Arabic and Chinese; the provided segmentation schemes have been found to work well for a variety of applications. The system requires Java 1.8+ to be installed, and at least 1 GB of memory is recommended for documents that contain long sentences. For files with shorter sentences (e.g., 20 tokens), decrease the memory requirement by changing the option java -mx1g in the run scripts.
textacy is a Python library for performing higher-level natural language processing (NLP) tasks, built on the high-performance spaCy library. With tokenization, part-of-speech tagging, dependency parsing, etc. offloaded to another library, textacy focuses on tasks facilitated by the ready availability of tokenized, POS-tagged, and parsed text.
Treat is a toolkit for natural language processing and computational linguistics in Ruby that aims to build a language- and algorithm-agnostic NLP framework for Ruby, with support for tasks such as document retrieval, text chunking, segmentation and tokenization, natural language parsing, part-of-speech tagging, keyword extraction, and named entity recognition.
APEX is an AI-enhanced technology platform intended to provide solutions for your business end to end. With APEX you gain access to the same powerful AI capabilities and tools used by the tech unicorns at a fraction of the cost. APEX allows you to realize the full benefits of the AI technologies, while sustaining governance, flexibility, scalability, tool compatibility, and collaboration. Through the integration of the most advanced open source and proprietary 2021.AI technological components, APEX enhances data governance, increases maintainability and quality of the AI models. APEX can be installed either on-premises, or consumed in private or public cloud. APEX offers 3 editions: Front, Go, and Enterprise, all capable of delivering immediate business value for companies of all sizes, in all the stages of AI maturity and ambitions.
NeuroIntelligence is a neural networks software application designed to assist neural network, data mining, pattern recognition, and predictive modeling experts in solving real-world problems. NeuroIntelligence features only proven neural network modeling algorithms and neural net techniques; the software is fast and easy to use.
Arria NLG Platform is a platform that combines cutting-edge techniques in data analytics, artificial intelligence, and computational linguistics. It analyses large and diverse data sets and automatically writes tailored, actionable reports on what's happening within that data, with no human intervention, at vast scale and speed.
BPN-NeuralNetwork is a machine learning library that implements a three-layer (input, hidden, and output) neural network using Back Propagation Neural Network (BPN) training, QuickProp theory, and Kecman's theory (EDBD). KRBPN can be used in product recommendation, user behavior analysis, data mining, and data analysis.
Captricity’s AI-powered automation enables paper to travel at the speed of digital. Captricity is used by eight of the top ten U.S. insurance companies and other enterprises to extract and enhance data from any customer channel—including handwritten documents—and deliver it seamlessly into downstream business systems.
CCV is an open source, cross-platform solution for blob tracking with computer vision. It can interface with various web cameras and video devices, connect to various TUIO/OSC/XML enabled applications, and supports many multi-touch lighting techniques, including FTIR, DI, DSI, and LLP, with expansion planned for future vision applications (custom modules/filters).
Cogito API is a ready to deploy and fully configured API series that helps developers accelerate creation and deployment of unique applications that leverage large volumes of unstructured information from multiple sources. Cogito API is easily deployed or integrated for faster evaluation and analysis of content such as web pages, social media data or any big data sets or real-time information streams.
Cortical.io has wrapped its Retina Engine into an easy-to-use, powerful platform for fast semantic search, semantic classification, and semantic filtering that can process any kind of text, independently of language and length. It enables users to process terabytes of data orders of magnitude faster than other methods.
CRF++ is a simple, customizable, and open source implementation of Conditional Random Fields (CRFs) for segmenting/labeling sequential data. CRF++ is designed for generic purposes and can be applied to a variety of NLP tasks, such as named entity recognition, information extraction, and text chunking.
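CRF++ reads training data as one token per line with whitespace-separated feature columns, the last column being the label, and blank lines separating sentences; which features the model uses is declared in a template file. A minimal template in the style of the CRF++ documentation (the exact feature choices here are illustrative):

```
U00:%x[-1,0]
U01:%x[0,0]
U02:%x[1,0]
U03:%x[0,1]
B
```

Each `%x[row,col]` refers to a relative token position (`row`, offset from the current token) and a feature column (`col`); `U` lines expand into unigram feature functions over the output label, and the lone `B` adds bigram features over adjacent output labels.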
Datumbox API offers a large number of off-the-shelf Classifiers and Natural Language Processing services which can be used in a broad spectrum of applications including: Sentiment Analysis, Topic Classification, Language Detection, Subjectivity Analysis, Spam Detection, Reading Assessment, Keyword and Text Extraction and more.
IBM Watson Language Translator is a service that provides domain-specific translation utilizing Statistical Machine Translation techniques, it offers multiple domain-specific translation models, plus three levels of self-service customization for text with very specific language.
Kapiche uses the power of Natural Language Processing to analyse your unstructured data, letting you get on with the process of creating recommendations. Be it open survey responses, online reviews, or social media, unstructured data is the key to knowing what your customers want. However, drawing this information into a readily understood format can be difficult and time consuming. That’s where Kapiche fills the gap.
LingPipe is a tool kit for processing text using computational linguistics that is used to do tasks like: Find the names of people, organizations or locations in news, Automatically classify Twitter search results into categories and Suggest correct spellings of queries.
MeTA is a modern C++ data sciences toolkit that provides: text tokenization, including deep semantic features like parse trees; inverted and forward indexes with compression and various caching strategies; a collection of ranking functions for searching the indexes; topic models; classification algorithms; graph algorithms; language models; a CRF implementation (POS-tagging, shallow parsing); wrappers for liblinear and libsvm (including libsvm dataset parsers); UTF8 support for analysis of various languages; and multithreaded algorithms.
MetaEyes is a reporting service that, with the help of image recognition technology, analyzes Instagram (plus other services) photos, revealing a wealth of actionable information. MetaEyes can detect faces & demographics, explicit/racy content, sentiment, scenes & objects, logos, locations, celebrities, and other info. Powerful reporting tools analyze and filter data in myriad ways: summary and granular reports; detailed reports filterable by all MetaEyes attributes; views by date range with different sorting methods; and export as PDF or CSV files for further analysis. Photos provide a far more refined insight, allowing for novel ways of engagement: discover potential fans by previously undetectable attributes; surface and engage with previously ghost influencers; find user-generated brand photos and directly contact the creators to use them in marketing campaigns; and monitor for brand-related crisis situations, engaging directly to avoid negative virality.
MobileEngine makes it easy for you to add image recognition to your app. You provide a reference database of images (e.g. artwork, consumer packaged goods, book covers, catalog pages, etc.) and when your users photograph that object, MobileEngine finds your matching reference image.
Multi-Perceptron-NeuralNetwork is a machine learning library that implements multi-layer perceptron (MLP) neural networks trained with Back Propagation Neural Network (BPN). It supports an unlimited number of hidden layers for training tasks and can be used in product recommendation, user behavior analysis, data mining, and data analysis.
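The back-propagation training mentioned here generalizes the classic single-layer perceptron learning rule to many layers. A minimal plain-Python sketch of that building block on a toy AND task (this is not the library's API; hyperparameters are illustrative):

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Single-layer perceptron trained with the error-correction rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred            # -1, 0, or +1
            w = [w[0] + lr * err * x[0],   # nudge weights toward the target
                 w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in AND])
# [0, 0, 0, 1] -- the learned weights reproduce logical AND
```

Back-propagation extends this idea by propagating the error signal backward through hidden layers via the chain rule, which is what lets MLPs with hidden layers learn non-linearly-separable functions such as XOR.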
MXNet is a Flexible and Efficient Library for Deep Learning that supports both imperative and symbolic programming, calculates the gradient automatically for training a model, runs on CPUs or GPUs, on clusters, servers, desktops, or mobile phones and supports distributed training on multiple CPU/GPU machines, including AWS, GCE, Azure, and Yarn clusters.
nDimensional's product, nD, is a full stack application development Platform-as-a-Service (PaaS) that allows companies to rapidly design, develop, deploy and operate AI, big data and IoT applications. nD empowers data-rich industries to drive real-time actions that quantitatively improve business outcomes, proven at enterprise scale, to bring the power of predictive insights and optimization to all vertical markets and value chains.
Neuroph is a lightweight Java neural network framework for developing common neural network architectures. It contains a well-designed, open source Java library with a small number of basic classes that correspond to basic NN concepts, and has a GUI neural network editor to quickly create Java neural network components.
NeuroSolutions is a neural network software package for Windows that combines a modular, icon-based network design interface with an implementation of advanced artificial intelligence and learning algorithms. Using intuitive wizards or an Excel interface, it performs cluster analysis, sales forecasting, sports predictions, medical classification, and much more.
Natural Language Processing for JVM languages (NLP4J) provides tools readily available for research in various disciplines, frameworks for fast development of efficient and robust NLP components, and an API for manipulating computational structures in NLP (e.g., dependency graphs).
Omnitraq extracts critical business insights through our award-winning and patented technology from call center calls, web media, video, audio, and text data. By delivering these insights at low cost, with speed, and at scale, Omnitraq can provide both SMB and Enterprise clients with a suite of affordable and high impact BI tools.
Phrasetech provides a scalable end-to-end solution for content creation, control and optimization. Our Natural Language Generation system (NLG), powered by innovative AI technology, enables the production of engaging conversational texts at unprecedented volumes and superb quality.
RSNNS provides neural networks in R using the Stuttgart Neural Network Simulator (SNNS), a library containing many standard implementations of neural networks; the package wraps the SNNS functionality to make it available from within R. Using the RSNNS low-level interface, all of the algorithmic functionality and flexibility of SNNS can be accessed. The package also contains a convenient high-level interface, so that the most common neural network topologies and learning algorithms integrate seamlessly into R.
Sonix is an online platform that combines automated transcription and editing. We built the world's first AudioText Editor™ that allows users to edit audio in a revolutionary new way: Edit audio by editing text. Sonix integrates with Adobe Audition, Adobe Premiere, Final Cut Pro, Audacity, and Hindenburg.
Stanford CoreNLP provides a set of natural language analysis tools that can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, and mark up the structure of sentences in terms of phrases and word dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract open-class relations between mentions, etc.
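CoreNLP exposes these analyses as a configurable annotator pipeline; which tools run is controlled by a properties setting along these lines (a typical selection from the CoreNLP documentation; trim the list to only what you need, since later annotators depend on earlier ones):

```
annotators = tokenize, ssplit, pos, lemma, ner, parse, dcoref
```

Here `tokenize` and `ssplit` produce tokens and sentences, `pos` and `lemma` add parts of speech and base forms, `ner` marks named entities and normalizes dates/times/quantities, `parse` builds the phrase-structure and dependency analyses, and `dcoref` links coreferent mentions.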
Stanford Phrasal is a statistical phrase-based machine translation system, written in Java, that provides much the same functionality as the core of Moses. Its features include an easy-to-use API for implementing new decoding model features, the ability to translate using phrases that include gaps (Galley et al. 2010), and conditional extraction of phrase tables and lexical reordering models.
Stanford Pattern-based Information Extraction and Diagnostics (SPIED) is a pattern-based entity extraction and visualization tool that provides code for two components: learning entities from unlabeled text, starting with seed sets and using patterns in an iterative fashion; and visualizing and diagnosing the output from one or two systems.
Synthesys is a solution that adds the brainpower of thousands of people to a team by reading through all the data and highlighting the important people, places, organizations, events, and facts being discussed. It resolves the highlighted points to determine what's important, connects the dots, and figures out what the final picture means by comparing it with the opportunities, risks, and anomalies the team is looking for.
Tregex is a utility for matching patterns in trees, based on tree relationships and regular expression matches on nodes (the name is short for "tree regular expressions"). Tregex comes with Tsurgeon, a tree transformation language. Also included from version 2.0 on is a similar package, called Semgrex, which operates on dependency graphs (class SemanticGraph).
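A couple of illustrative Tregex patterns give the flavor (node labels match literally or via /regex/; `<` means "is the parent of" and `<<` means "dominates", per the Tregex documentation):

```
NP < NN
NP << /^VB/
```

The first matches any NP node that is the immediate parent of an NN; the second matches an NP dominating, at any depth, a node whose label starts with VB.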
Ucto is a tool that tokenizes text files: it separates words from punctuation and splits sentences. It offers several other basic preprocessing steps, such as changing case, which can all be used to make text suitable for further processing such as indexing, part-of-speech tagging, or machine translation.
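A minimal sketch of rule-based tokenization and sentence splitting in plain Python, in the spirit of (but not using) Ucto's rules:

```python
import re

def tokenize(text):
    """Split off punctuation as separate tokens via a simple regex."""
    return re.findall(r"\w+|[^\w\s]", text)

def split_sentences(tokens):
    """Group tokens into sentences, ending at ., !, or ?."""
    sentences, current = [], []
    for tok in tokens:
        current.append(tok)
        if tok in ".!?":
            sentences.append(current)
            current = []
    if current:
        sentences.append(current)
    return sentences

toks = tokenize("Hello, world! This works.")
print(toks)                        # ['Hello', ',', 'world', '!', 'This', 'works', '.']
print(len(split_sentences(toks)))  # 2
```

Real tokenizers like Ucto handle the hard cases this toy ignores: abbreviations ("e.g."), decimal numbers, URLs, and language-specific punctuation conventions.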
VLFeat is an open source library that implements popular computer vision algorithms specializing in image understanding and local features extraction and matching, it include Fisher Vector, VLAD, SIFT, MSER, k-means, hierarchical k-means, agglomerative information bottleneck, SLIC superpixels, quick shift superpixels, large scale SVM training, and many others. It is written in C for efficiency and compatibility, with interfaces in MATLAB for ease of use, and detailed documentation throughout. It supports Windows, Mac OS X, and Linux.
VoiceBase is defining the future of deep learning and communications by providing unparalleled access to spoken information for businesses to make better decisions. With flexible APIs developers and enterprises build scalable solutions with VoiceBase by embedding speech-to-text, conversational analytics, and predictive analytics capabilities into any big voice application. VoiceBase’s customers include Amazon Web Services, Twilio, Nasdaq, HireVue and Veritone. The company is privately held and is based in San Francisco, California.