Stack Overflow: What are good examples of solutions to neural network problems?
[+162] [21] knorv
[2009-10-13 12:15:39]
[ machine-learning artificial-intelligence neural-network evolutionary-algorithm ]
[ http://stackoverflow.com/questions/1559843/what-are-good-examples-of-solutions-to-neural-network-problems ] [DELETED]

I'd like to know about specific problems that have been solved using artificial neural network techniques and what libraries/frameworks you used if you didn't roll your own.

I'm looking for first-hand experiences, so please do not answer unless you have that.

(4) Having gone through all the answers, I'd say most people used them academically, with a handful using them to solve real problems. Very interesting. - ruipacheco
(1) I'd answer this question but i haven't actually used them yet ^-^ i do plan to write an adaptive game AI soon though... - RCIX
(2) I tend to use a natural neural network for problem solving. - Stephen C
I'm going to use it to solve a data audit problem; the answers below are very interesting and helpful. - ciphor
[+167] [2009-10-21 16:18:31] Nate Kohl [ACCEPTED]

I've done some work for Toyota that involved using neural networks to predict when a driver was about to crash [1].

We used a neuroevolution algorithm [2] called NEAT [3] to evolve networks that converted either sonar, laser rangefinder, or CCD camera input into a warning signal. The warning signal was then provided to the driver, with the goal of helping them avoid dangerous situations.

Here are some examples of different sensor modalities that we used in both simulation and on a small four-wheeled robot:

[Images: simulated laser rangefinder and sonar used in simulation; warnings as a robot drives around an office]

The results were quite successful; the evolved networks were able to use relatively low-fidelity input to make surprisingly good predictions of imminent danger [4].

The NEAT algorithm has also been used on a variety of other problems (including some awesome work in video games [5]) and has been implemented in many modern languages [6].
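To give a flavor of the neuroevolution approach, here is a toy sketch: a fixed-topology network whose weights are evolved to turn "sensor" readings into a warning signal. This is deliberately much simpler than NEAT (which also evolves network topology), and the sensor cases are invented for illustration.

```python
import math
import random

random.seed(0)

N_HIDDEN = 3

def forward(weights, inputs):
    # Tiny fixed-topology net: one hidden layer of N_HIDDEN tanh units.
    n = len(inputs)
    hidden = [math.tanh(sum(weights[h * n + i] * inputs[i] for i in range(n)))
              for h in range(N_HIDDEN)]
    out = sum(weights[N_HIDDEN * n + h] * hidden[h] for h in range(N_HIDDEN))
    return math.tanh(out)  # warning signal in [-1, 1]

def fitness(weights, cases):
    # Higher is better: negative squared error against desired warnings.
    return -sum((forward(weights, x) - y) ** 2 for x, y in cases)

def evolve(cases, n_inputs, generations=150, pop_size=30):
    genome_len = N_HIDDEN * n_inputs + N_HIDDEN
    pop = [[random.gauss(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        pop.sort(key=lambda w: fitness(w, cases), reverse=True)
        parents = pop[:pop_size // 2]
        children = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, cases))

# Invented "sensor" cases: close-obstacle readings -> warn (+1), clear -> -1
cases = [([0.9, 0.8], 1.0), ([0.1, 0.0], -1.0),
         ([0.7, 0.9], 1.0), ([0.0, 0.2], -1.0)]
best = evolve(cases, 2)
```

Real NEAT additionally speciates the population and adds nodes and connections over time; this sketch only captures the evolve-weights-against-a-fitness-function core.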

[1] http://nn.cs.utexas.edu/keyword?vehicle-warning
[2] http://en.wikipedia.org/wiki/Neuroevolution
[3] http://www.cs.ucf.edu/~kstanley/neat.html
[4] http://nn.cs.utexas.edu/pages/research/neat-warning/
[5] http://www.youtube.com/watch?v=QiBOk6ar1mg&feature=player%5Fembedded
[6] http://www.cs.ucf.edu/~kstanley/neat.html#intro
[7] http://nn.cs.utexas.edu/?kohl%3Agecco06
[8] http://eplex.cs.ucf.edu/publications/2009/hastings.cig09.html
[9] http://nn.cs.utexas.edu/keyword?stanley%3Acig05

(1) +1 Really interesting stuff, especially the NEAT algorithms. That link will have my spare time for breakfast ;) - pyrocumulus
(2) +1 I also found that really interesting. Many thanks - Basic
(1) The NEAT algorithms are really cool. A bit difficult to implement (essentially tracking the history of a mutation, IIRC, and then using the history in the crossover), but quite biologically sound. - jamesh
I have used NEAT too. There is a Java implementation (anji.sourceforge.net) in case someone is interested. I extended the active vision system that comes with this package as part of my class project. - A. K.
That's pretty "neat". - britney
[+52] [2009-11-24 11:18:56] Paul Lammertsma

In 2007 I was part of a group of master's students tasked with classifying ground (vs. buildings, cars, trees, etc.) in a photograph.

The project was focused on image processing and understanding, where the task was to attempt to extrapolate parts of panoramic 360° photographs. For example, we would take the photograph below (taken with a customized vehicle) and attempt to discover the ground cover (i.e. the road, sidewalk, etc.) in the photo.

[Image: panoramic photograph of a street in Utrecht]

If we extrapolate the ground plane of the previous image by hand, we would probably agree upon an image resembling:

[Image: manual segmentation of a panoramic photograph]

We can then consider this the ground truth.

The application our research group developed, Ground Plane Classification (GPC), uses a six-step taxonomy (proposed by M. Egmont-Petersen et al., 2002) comprising pre-processing, data reduction, segmentation, object detection and image understanding (with optimization throughout). The classification occurs in the image understanding phase, which features a feed-forward artificial neural network specially trained on a training set of panoramic photographs.
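A per-pixel classifier in the spirit of the image-understanding stage can be sketched as follows. This is a single logistic neuron (a stand-in for the group's full feed-forward network), and the two pixel features and training data are invented for illustration.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pixels, labels, epochs=500, lr=0.5):
    # pixels: per-pixel feature vectors (e.g. [brightness, height in frame]);
    # labels: 1 for ground, 0 for non-ground.
    w = [0.0] * len(pixels[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(pixels, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def error_rate(w, b, pixels, labels):
    # Fraction of pixels whose predicted class disagrees with ground truth.
    wrong = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5)
                != bool(y) for x, y in zip(pixels, labels))
    return wrong / len(labels)

# Toy data: ground pixels sit low in the frame (height feature near 1.0)
pixels = ([[random.random(), random.uniform(0.7, 1.0)] for _ in range(50)]
          + [[random.random(), random.uniform(0.0, 0.3)] for _ in range(50)])
labels = [1] * 50 + [0] * 50
w, b = train(pixels, labels)
```

Comparing the classifier's output against a hand-labeled image, as in `error_rate` above, is how a margin of error like the 3-4% quoted below would be measured.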

Our results typically give a margin of error of about 3 to 4%. The automatically classified image below boasts an error rate of only 1.1%.

[Image: automatic segmentation of a panoramic photograph]

Originally, we planned on taking GPS coordinates into account, but that didn't work out in the end as (a) they aren't accurate enough and (b) we don't have a map that resembles structures in the desired detail.

Feel free to read more about it [1]!

[1] http://paul.luminos.nl/document/393

[+24] [2009-10-18 20:56:39] KushalP

Surprised nobody has chimed in with this one yet, but I used an artificial neural network to attempt to predict the financial markets ( FOREX [1]) for my final year dissertation. Did it mainly as a bit of fun, but found that I was able to get around 55-65% accuracy.

It's worth noting that neural networks are great for regression analysis [2]; in this case it was tied in with a graphing solution so I could see the live plotted results of the system. Sadly it was only able to achieve the above results for the next 10 pips [3] after around 2 weeks of training with 10 years' worth of data.
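The usual setup for this kind of time-series prediction is a sliding window of past returns fed into a network that predicts the next move. Here is a toy sketch of that framing with a single logistic neuron and a synthetic price series (the series, window width, and model are all invented; this is not the dissertation's system, and real market data has far less structure than this):

```python
import math
import random

random.seed(2)

# Synthetic returns with mild momentum (purely illustrative, not market data)
returns = [0.0]
for _ in range(600):
    returns.append(0.7 * returns[-1] + random.gauss(0, 0.5))

def make_windows(rets, width=5):
    # Each sample: `width` past returns -> direction of the next return.
    xs = [rets[t - width:t] for t in range(width, len(rets))]
    ys = [1 if r > 0 else 0 for r in rets[width:]]
    return xs, ys

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, epochs=100, lr=0.1):
    w = [0.0] * len(xs[0])
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    return w

xs, ys = make_windows(returns)
w = train(xs, ys)
hits = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == bool(y)
           for x, y in zip(xs, ys))
accuracy = hits / len(ys)
```

On this deliberately momentum-laden toy series the directional accuracy comes out well above chance; on real FX data, 55-65% (as reported above) is already a strong result.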

In terms of the libraries used, it was a mixture of solutions which I ended up cherry-picking and combining into my final solution. These involved a mixture of Encog [4], Joe Huwaldt's Neural Network Package [5] and PyML [6], blended together into my own Java solution.

F.Y.I. I found Joe's solution to be the easiest to prototype different neural network structures with quickly due to its simple construction and the two examples provided.

If I could start again I would easily choose to write it in Python, purely due to the sheer number of open source Python projects I found and gained inspiration from. Most of these I had to refactor into Java rather than use immediately for prototyping, which cost some of my implementation time -- although it gave me a better understanding of the code.

[1] http://en.wikipedia.org/wiki/Foreign%5Fexchange%5Fmarket
[2] http://en.wikipedia.org/wiki/Regression%5Fanalysis
[3] http://en.wikipedia.org/wiki/Percentage%5Fin%5Fpoint
[4] http://code.google.com/p/encog-java/
[5] http://homepage.mac.com/jhuwaldt/java/Packages/NeuralNets/NeuralNets.html
[6] http://pyml.sourceforge.net/

(34) 55-65% accuracy? You must now be a very wealthy man. - Mick
(5) @Mick Haha, sadly not. I don't have the kind of money required to use a high rate brokerage system and those figures were from one of the currency pairs. Not all of them fared so well. - KushalP
(3) It bears pointing out that the training overhead sounds quite high: "only able to get the [stated] results for the next 10 pips after around 2 weeks of training with 10 years worth of data." - Assad Ebrahim
[+21] [2009-10-18 21:06:20] snicker

I've used artificial neural networks to predict the shear strength of reinforced concrete columns, as well as their rotational deformation capacity. This is a problem that has many independent variables and extremely nonlinear results, perfect for ANNs.

I compiled a large database of tests from many sources to train the network and judge its accuracy. Applications of neural networks to solve Civil Engineering problems are not uncommon.

I had used an application called Trajan to do some of this work, but I intend to roll my own solution and revisit the problem because I need more control over the data.


(1) This sounds like an awesome application of NN, but I'm surprised you were able to get so much training data for this. Doesn't a "shear strength" data point imply that something has been "broken" to get the data? Or do people in labs spend all day breaking stuff to get the numbers? Just curious... - Caffeine Coma
(6) People actually do spend days, weeks, months, and even years forming and casting concrete specimens in structural engineering labs for the sole purpose of breaking them. Concrete is such a "random" material with so many variables that its behavior is extremely hard to predict, and the only way to verify predictions is with tests. Some of the data I have used is from as far back as 1968. - snicker
[+17] [2009-10-23 11:00:44] Mick

I did a PhD in neural networks. In it I solved several problems related to time series. For example, I modeled a mechanism for recalling sequences of patterns (rather like remembering a phone number). We do this with a system where the part of the sequence recited so far reminds you of what pattern comes next (that's why it's very hard to recall your phone number backwards). My PhD is online here [1].
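The "each pattern cues the next" idea can be illustrated with a simple Hebbian heteroassociative memory (this is a generic textbook construction, not the thesis's actual model): each stored pattern is linked, via an outer-product weight matrix, to its successor, so recitation runs forward but there is no stored backward mapping.

```python
import random

random.seed(3)

N = 128  # bits per pattern

def store(W, key, nxt):
    # Hebbian outer-product update: strengthen the key -> successor link.
    for i in range(N):
        for j in range(N):
            W[i][j] += nxt[i] * key[j]

def recall(W, key):
    # Threshold the weighted sum to recover a bipolar pattern.
    return [1 if sum(W[i][j] * key[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

# A "phone number": six digits, each encoded as a random bipolar pattern
digits = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(6)]
W = [[0.0] * N for _ in range(N)]
for a, b in zip(digits, digits[1:]):
    store(W, a, b)

# Forward recitation: the sequence so far cues the next digit,
# which mirrors why recalling a number backwards is so hard --
# no backward association was ever stored.
recited = [digits[0]]
for _ in range(len(digits) - 1):
    recited.append(recall(W, recited[-1]))
```

With patterns this long, the crosstalk between the stored associations is small enough that the whole chain is recited correctly from the first digit alone.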

I wrote all my own simulation software in C.

[1] http://www.reiss.demon.co.uk/misc/m%5Freiss%5Fphd.pdf

[+13] [2009-10-18 21:00:04] Doug McClean

I used AForge [1] to decide if multiple-choice answer bubbles had been filled-in, checked, crossed-out, etc.

[1] http://code.google.com/p/aforge/

Why would you want to know how they were marked? - Tim
To then know if the person taking the test had answered the question correctly. - Doug McClean
Oh, I thought you checked whether the answer was checked or crossed-out, which seemed interesting. - Tim
(4) Right, often we found people would answer the question by filling in a bubble (say they chose "c"), and would then decide that the real answer was "b". Then they might erase the "c" bubble and fill in "b", or they might put an X or a slash through "c" and fill in "b", etc. - Doug McClean
[+13] [2009-10-24 06:55:26] mjv

Although I've used NNs (with variable success) to recognize text patterns (like part numbers and such), the coolest neural net implementation I did was for a very simple game which I developed in the context of a challenge/contest for users of the Numenta NuPIC framework [1].

I didn't submit this game for the contest, owing to its incomplete user interface and general "roughness around the edges"; the neural network portion of the project, however, was functional and worked rather well.

I realize that Numenta's Hierarchical Temporal Memory [2] (HTM) concept implemented in NuPIC (I was using version 1.3 at the time) is somewhat of a departure from traditional neural network frameworks, but it may be worthy of notice in this SO posting.

The game is one where the player has to learn to communicate with a "pet" (or an "alien being"...) implemented as an HTM network. The means of communication is exchanging [imperfect] messages drawn on a small square grid, and "acting" accordingly by pressing a particular action button. The idea is to develop a "language" of sorts to express basic concepts (food, water, inside, outside, playing, ball, stick, "I need to sleep", etc.) in a consistent fashion so that the other party understands them.

The neural net portion of the project was derived from the image recognition demo which ships with NuPIC, but included a few twists, such as the automatic erasing of the dots that make up the image a certain amount of time after they are drawn, and also ongoing mixed-mode learning/recognition, whereas the demo keeps these two phases well separated.

The interesting part of this project was how it leveraged the extreme resilience to noise and imprecision in the message being submitted for recognition. HTMs are well known for this feature.

Maybe I should rekindle this (again, very basic/geeky) game and release it, open-source fashion, on Numenta's site or elsewhere. Another project for when I retire ;-)

[1] http://www.numenta.com/about-numenta/numenta-technology-2.php
[2] http://www.numenta.com/about-numenta/numenta-technology.php

[+12] [2009-10-18 20:36:00] hplbsh

I used FANN to do audio signal classification (not speech recognition). It was a background toy project mainly to learn about NN. Enjoyed it and learnt a lot; FANN was dead easy to understand and get running (considering I knew nothing beforehand).

Unfortunately there are patents covering a vast range of neural net applications and it's easy to fall foul of them, so bear that in mind if it's anything commercial.


(3) Just adding this comment to remember this answer because you cannot star answers, only questions. - clyfe
[+12] [2010-06-17 17:28:18] Bradley Powers

I rolled my own Neural Network based autopilot for an autonomous helicopter. It used Cascade correlation [1], and online reinforcement learning [2] (positive reinforcement for things like flying level, being near a GPS waypoint; negative reinforcement for things like crashing, etc.)
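The reward-shaping idea described above (positive reinforcement for desired states, strongly negative for crashes) can be shown with a deliberately tiny stand-in: tabular Q-learning on a 1-D "reach the waypoint without drifting off the edge" task. The task, rewards, and algorithm choice here are all invented for illustration; the answerer's autopilot used a neural network function approximator, not a table.

```python
import random

random.seed(4)

N_STATES, WAYPOINT = 10, 7
ACTIONS = (-1, 1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = state + action
    if nxt < 0 or nxt >= N_STATES:
        return state, -10.0, True   # "crash": strong negative reinforcement
    if nxt == WAYPOINT:
        return nxt, 1.0, True       # reached the waypoint: positive reward
    return nxt, -0.1, False         # small cost for wandering

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):
    state = random.choice([s for s in range(N_STATES) if s != WAYPOINT])
    done = False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward if done else (
            reward + gamma * max(Q[(nxt, a)] for a in ACTIONS))
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

# Greedy policy after learning: head toward the waypoint from either side
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

The point the answer makes about structuring the reinforcement carefully shows up even here: the -10 crash penalty and the small per-step cost are what make "fly straight to the waypoint" the learned behavior.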

[1] http://en.wikipedia.org/wiki/Cascade_correlation_algorithm
[2] http://en.wikipedia.org/wiki/Reinforcement_learning

+1. Sounds interesting. I'll have to read up on cascade correlation. Your answer prompted me to ask a related question that you may have some insight on: stackoverflow.com/questions/3068658/… - Drew Noakes
(1) BTW how successful was the NN in the end? Did it deal with unexpected issues (gusts of wind, etc) ok? - Drew Noakes
Actually, reinforcement learning looks very useful for what I'm trying to do. Thanks for the tip. - Drew Noakes
The NN was actually incredibly effective, it was able to deal with crazy Boston winds, being pushed, etc. Reinforcement learning is incredibly useful, particularly for being relatively simple. You can get great results from it if you're careful with how you structure the reinforcement. Sometimes you have to be clever, as measuring how good or bad the autopilot is performing can be difficult (In my case, the issue was with GPS performance, in that it tends to drift randomly, regardless of the movement of the sensor). - Bradley Powers
[+8] [2009-11-24 11:10:12] Gregory Pakosz

I'm using neural networks at the heart of a proprietary library that performs handwriting recognition on pen-based input.

EDIT: and now some shameless plug [1]

[1] http://webdemo.visionobjects.com/

(1) Do you have any suggestions for Chinese. How to build something like this nciku.com - s84
(1) Man I tried to write with my horrible handwriting and a mouse. It worked 100%. What kind of wizardry it is? My Samsung phone doesn't do that good job even thou I use a stylus or my finger! - lukas.pukenis
[+6] [2009-10-19 20:39:52] Ade Miller

My PhD was investigating using ANNs for a couple of different image processing related problems. The abstract and references to publications are here:

http://www.ademiller.com/tech/reports/thesis%5Fabstract.htm
http://www.ademiller.com/blogs/tech/about-2/

The work addressed image processing and data analysis problems using back propagation networks and Kohonen self organizing maps:

1) Classifying objects in an image taken using a large astronomical telescope. Objects are either stars (point-like and spherical) or galaxies (diffuse and axially asymmetric). A system based on a back propagation network with image pre-processing achieved comparable results to other existing methods.

2) Characterising images of chest cavities captured by an electrical impedance tomography system. The goal was to automatically estimate lung volume using a neural network. This was shown to work with simulated data but never proven in a clinical trial.

3) Reconstructing incoming gamma ray trajectories from events within a gamma ray telescope. This is a very hard problem to solve by any means, and ANNs didn't produce good results.
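For readers unfamiliar with the Kohonen self-organizing maps mentioned above, here is a minimal 1-D SOM sketch: units compete for each input, and the winner and its map neighbors move toward it. The two-feature "star vs. galaxy" data is invented for illustration and has nothing to do with the thesis's real telescope data.

```python
import math
import random

random.seed(5)

def bmu(units, x):
    # Index of the best-matching unit (closest weight vector).
    return min(range(len(units)),
               key=lambda u: sum((units[u][k] - x[k]) ** 2
                                 for k in range(len(x))))

def train_som(data, n_units=4, epochs=100):
    dim = len(data[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                 # decaying learning rate
        radius = max(0.5, (n_units / 2) * (1 - epoch / epochs))
        for x in data:
            best = bmu(units, x)
            for u in range(n_units):
                # Neighborhood function: units near the winner move too.
                h = math.exp(-((u - best) ** 2) / (2 * radius ** 2))
                for k in range(dim):
                    units[u][k] += lr * h * (x[k] - units[u][k])
    return units

# Invented two-feature objects: [compactness, ellipticity]
stars = [[random.uniform(0.8, 1.0), random.uniform(0.0, 0.2)]
         for _ in range(30)]
galaxies = [[random.uniform(0.0, 0.3), random.uniform(0.5, 1.0)]
            for _ in range(30)]
units = train_som(stars + galaxies)
star_units = {bmu(units, x) for x in stars}
galaxy_units = {bmu(units, x) for x in galaxies}
```

After training, the two object classes claim disjoint regions of the map, which is the property that makes SOMs useful for unsupervised exploration of this kind of data.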

I used a package called PlaNet and some home-grown code I wrote as part of my work. Looks like it's still around:

http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/neural/systems/planet/0.html

As is Kohonen's software for maps:

http://www.cis.hut.fi/somtoolbox/theory/somalgorithm.shtml


[+6] [2009-10-21 15:26:54] TNT

We (I was on a team) used neural networks for sound recognition (think underwater and land-based military issues), and for tracking items moving through time and space. All very cool stuff. We built our own hardware and software that blew away the speed of traditional computers. We also worked most of the traditional NN problems.

When you get down to it, NN's just need to multiply really really fast.
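That observation is worth making concrete: a network's forward pass is just one matrix-vector multiply per layer plus a cheap nonlinearity, which is why custom multiply-accumulate hardware speeds it up so dramatically. A bare-bones sketch (generic, not the team's system):

```python
import math

def dense_layer(W, b, x):
    # One matrix-vector multiply plus a tanh nonlinearity:
    # W has one row of weights per output neuron.
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bias)
            for row, bias in zip(W, b)]

def forward(layers, x):
    # A whole network is just this multiply repeated layer by layer.
    for W, b in layers:
        x = dense_layer(W, b, x)
    return x

# Two-layer example with hand-picked weights (2 inputs -> 2 hidden -> 1 output)
net = [
    ([[0.5, -0.25], [0.1, 0.3]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
out = forward(net, [0.7, 0.3])
```

Everything interesting about fast NN hardware lives inside that inner `sum(w * xi ...)` loop.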


[+6] [2009-10-23 11:33:56] John

My thesis was about the prediction and evaluation of textile elasticity from an industrial robot (RV-4A) under tension. At first, about 20 different textile pieces were evaluated by an expert and assigned an empirical elasticity value in the range of 0 to 1. Then the robot performed a tension test on all of these pieces, and the elasticity curve (elongation-force) was recorded. Those data were then fed into a neural network, with the goal of predicting the textile's empirical elasticity value as soon as possible, without being fed the whole curve. The final trained neural network was used by the robot to classify fabrics and textiles in real time without finishing the tensile test, which modelled the real empirical way an expert classifies textiles. The results were very good. For more info see

Intelligent evaluation of fabrics' extensibility from robotized tensile test, Panagiotis N. Koustoumpardis, John S. Fourkiotis, Nikos A. Aspragathos, International Journal of Clothing Science and Technology, Year: 2007, Volume: 19, Issue: 2, Pages: 80-98

I didn't use any NN library; the algorithms used, Resilient Propagation (RPROP) and Levenberg-Marquardt (LM), were programmed from scratch.
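For anyone curious what "from scratch" involves, the heart of RPROP is a sign-based, per-weight step-size rule: grow the step while the gradient keeps its sign, shrink it on a sign change. Below is a sketch of that update (the iRPROP- variant) applied to a toy 1-D quadratic rather than real network gradients; constants follow the commonly cited defaults.

```python
def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    # Adapt the per-weight step size from the sign of successive gradients.
    if grad * prev_grad > 0:
        step = min(step * eta_plus, step_max)    # same sign: accelerate
    elif grad * prev_grad < 0:
        step = max(step * eta_minus, step_min)   # sign flip: back off
        grad = 0.0                               # iRPROP-: skip this update
    # Move against the gradient by the adapted step (magnitude only).
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, grad, step

# Minimize (w - 3)^2, whose gradient is 2 * (w - 3)
w, prev_grad, step = 0.0, 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3)
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
```

Because only the gradient's sign is used, RPROP is robust to the badly scaled gradients typical of backpropagation, which is part of why it was a popular choice to hand-implement.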


[+6] [2011-01-07 17:42:10] ghoul2

About 5 years ago I used a modest-sized fixed-point neural network running on a video processing (DSP) board to recognize the channel logos in an incoming video feed (the feed flipped to various channels somewhat randomly). The application was real, though it will have to remain unexplained here :(.

I was rather proud of the entire thing as I implemented it in ~3 days (the client needed a solution "yesterday") end to end, including making a neural net trained using a floating-point implementation run fast on a fixed-point DSP.
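The float-to-fixed-point conversion step mentioned here is worth a small illustration. A common scheme (shown below; the answer doesn't say which format was actually used) is Q15: weights and activations become 16-bit integers with 15 fractional bits, products are accumulated in wide precision and shifted back down, as a DSP's multiply-accumulate unit does.

```python
SCALE = 1 << 15          # Q15: 1 sign bit, 15 fractional bits

def to_q15(x):
    # Quantize a float in [-1, 1) to a Q15 integer, saturating at the edges.
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mul(a, b):
    # Product of two Q15 values, shifted back down to Q15.
    return (a * b) >> 15

def dot_q15(ws, xs):
    # Accumulate raw products in wide precision (as DSP MACs do),
    # then rescale once at the end to limit rounding error.
    acc = 0
    for w, x in zip(ws, xs):
        acc += w * x
    return acc >> 15

weights = [0.25, -0.5, 0.125]
inputs = [0.5, 0.5, 0.8]
qw = [to_q15(w) for w in weights]
qx = [to_q15(x) for x in inputs]
float_result = sum(w * x for w, x in zip(weights, inputs))
fixed_result = dot_q15(qw, qx) / SCALE
```

Training in floating point and then quantizing like this is exactly the "make a float-trained net run fast on a fixed-point DSP" problem; the engineering work is checking that the quantization error stays small enough not to change the network's decisions.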


[+4] [2009-10-23 13:46:40] Nick Luchsinger

I used MATLAB's Neural Networks Toolbox [1] for some rudimentary handwritten-digit classification. My impression is that the Neural Networks Toolbox is very powerful/useful, but the 900-something page documentation is a real beast (both for size and quality), so I can't say I enjoyed working with it a whole lot.

[1] http://www.mathworks.com/products/neuralnet/

[+4] [2009-10-23 14:54:26] Sean McCauliff

About 12 years ago, working at CEDAR [1], I used SNNS [2] to train a fast machine print alphanumeric character recognizer and to discriminate between hand written envelopes and machine printed ones. Matlab was useful to generate different kinds of image features.

Actually writing neural network code is not difficult. Usually the kinds of problems where you want to use NN are very data driven. The issues are more about playing with the data, generating useful features and finding statistical models that work. The software issues are more about efficiently generating features and executing models.

[1] http://www.cedar.buffalo.edu/
[2] http://www.ra.cs.uni-tuebingen.de/SNNS/

[+4] [2011-01-07 17:39:20] tgflynn

I worked on recognition systems for print and handwritten text that used neural networks (specifically multi-layer perceptrons) as classifiers for individual characters. These were used in a commercial system that is in very large scale use.

I also developed a convolutional neural net implementation for these types of applications. The code was all developed in house by me or others.


[+2] [2009-10-17 12:39:45] AndreyAkinshin

I know 3 good Neural-Network libraries:

  1. AForge [1] is my favorite. It is written in .NET (there is a CodeProject article [2] on AForge)

  2. The good and the old library is FANN [3]

  3. NeuronDotNet [4] is another good library.

There is also more information about neural nets in these articles on CodeProject [5].

[1] http://code.google.com/p/aforge/
[2] http://www.codeproject.com/KB/recipes/aforge%5Fneuro.aspx
[3] http://leenissen.dk/fann/
[4] http://neurondotnet.freehostia.com/index.html
[5] http://www.codeproject.com/info/search.aspx?artkw=Neural+Net&vidlst=81%2c64%2c65%2c69&sa%5Fao=False&sa%5Fso=17&sa%5Fas=1%2c3&pgnum=2

(8) what problems did you solve using ANN? - Hannson
[+2] [2010-02-18 06:31:27] Padu Merloti

I've implemented a system for iris recognition using backpropagation neural networks in Matlab. I did use the Matlab ANN toolbox, and if you're interested, you can download [1] a brief paper I wrote describing my experiences.

The paper includes not only material on the EBNN but also on the other techniques used in the whole system, like image segmentation and feature extraction using independent component analysis (ICA).

[1] http://www.merlotti.com/EngHome/Computing/iris_rec_nn.pdf

[+2] [2010-12-04 10:02:02] Panagiotis Panagi

Utilized neural networks in my PhD for (adaptively) approximating unknown functions, and used these results for controlling a network of interconnected systems (for example, a power distribution network, a swarm of mobile robots, etc.).

In an interconnected system, we typically don't know (or it is too difficult to compute) how each subsystem affects the others. For example, how the power demand in one area affects another area. NNs to the rescue: they provide an estimate of this interconnection effect. This is achieved by constructing an adaptive algorithm that estimates the weights of a NN online, in a way that minimizes the approximation error.
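The online weight-adaptation idea can be sketched with a linear-in-the-weights network (Gaussian basis functions) and a gradient adaptation law. Everything specific here is invented for illustration: the "unknown interconnection" g, the basis widths, and the excitation trajectory are toy choices, not the PhD's actual formulation.

```python
import math

CENTERS = [i / 4 for i in range(-8, 9)]   # 17 Gaussian bases covering [-2, 2]

def phi(x):
    # Regressor vector: one Gaussian bump per center.
    return [math.exp(-((x - c) ** 2) / 0.1) for c in CENTERS]

def g(x):
    # Stand-in for the unknown interconnection effect to be approximated.
    return math.sin(2 * x) + 0.3 * x

weights = [0.0] * len(CENTERS)
gamma = 0.2   # adaptation gain

# Online pass: at each time step, measure g along the current trajectory
# and adapt the weights to shrink the instantaneous approximation error.
for t in range(4000):
    x = 2 * math.sin(0.05 * t)            # persistently exciting trajectory
    feats = phi(x)
    err = sum(w * f for w, f in zip(weights, feats)) - g(x)
    weights = [w - gamma * err * f for w, f in zip(weights, feats)]

def estimate(x):
    return sum(w * f for w, f in zip(weights, phi(x)))
```

Because the approximator is linear in its weights, this gradient law is the same least-mean-squares update used throughout adaptive control, and with a persistently exciting input the estimate converges toward g over the visited region.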


[+2] [2011-01-20 14:48:24] Predictor

I've used neural networks any number of times. One that comes to mind was an industrial quality prediction project I did for a manufacturing company. The company manufactured metal parts (ball bearings), and could run the parts through different phases of a particular finishing process for different amounts of time. My job was to predict how long this would take (that was easy), and what the part quality measurements would be when the process completed.

I used a commercial tool to fit a polynomial network to their data. The network accepted as input the machine process settings and the material type being worked.


(1) Did it work well? was it a successful story/happy end? are some lessons learned from this usage? - lmsasu
Yes, it did work well. One thing which certainly helped was that the client company's statistician had designed the training data. Rather than just sample items coming out of the factory, he produced a schedule of combinations of specific input values to use, maximizing the information we'd get from our analysis. - Predictor