Imagine for a moment a two-dimensional array of simple, active, single-purpose computer processing elements, each of which integrates one or more inputs of differing intensity into a single output signal, which is routed in parallel to one or more places. Imagine that each output branch connection has an adjustable, resistor-like device to attenuate the strength of the signal on that particular leg. This layer of processors is stacked on a second, similar layer with the weighted outputs from the first layer routed to the inputs of the second, whose weighted outputs serve, in turn, as inputs for still another layer beyond, and so forth. Imagine being able to train the thousands of resistor-like devices in a way that will cause a unique input pattern fed into the first layer to result in a desired output pattern emerging from the final layer.
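Stripped to its essentials, that stack of layers and adjustable, resistor-like attenuators can be sketched in a few lines of Python. Everything here -- the sigmoid squashing function and the particular weight values -- is an illustrative assumption, not a prescription:

```python
import math

def forward(inputs, layers):
    """Propagate an input pattern through successive weighted layers.

    Each layer is a list of processing elements; each element is a list
    of weights, one per signal arriving from the layer before it (the
    adjustable attenuators described above)."""
    signal = inputs
    for layer in layers:
        # Each element sums its weighted inputs and squashes the total
        # through a sigmoid to produce a single output signal.
        signal = [
            1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, signal))))
            for weights in layer
        ]
    return signal

# A toy arrangement: 2 inputs -> 2 hidden elements -> 1 output.
hidden = [[0.5, -0.4], [0.3, 0.8]]   # weight values invented for illustration
output = [[1.0, -1.0]]
print(forward([1.0, 0.0], [hidden, output]))
```

Feed a different input pattern, or nudge any of the weights, and a different output pattern emerges from the final layer -- which is exactly the knob that training turns.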
That's the general idea behind a neural network, a data processing approach that can be adapted to address several classes of problems. The input pattern corresponds to your raw data, while the output pattern represents the answer you seek.
According to published research, neural networks have identified anomalies in production lines, found ideal staffing and budgeting levels, diagnosed malfunctions in a maintenance environment and smoothed out the response of nonlinear control systems -- all applications that tolerate a level of imprecision in the final number.
Join me on this month's dive into the morass we call the Web to search out zero-cost, non-commercial, registration-free resources aimed at providing practical information about this form of artificial intelligence. Remember, our mission is to search the Web so you don't have to.
A better view
Someone at Yale University, whose identity I was unable to discern, posted to the Web a set of class notes from a course on fractal geometry. Among the material is a visual conceptualization of a simple, three-layer neural net that will, I hope, bring some cogency to my description above. Aim your browser at http://classes.yale.edu/Fractals/CA/NeuralNets/NeuralNets.html for the graphic and a brief but complicated explanation of how those interlayer weightings are adjusted -- or trained -- to make the device respond properly.
In general, the idea behind training is to have a sufficiently large set of known inputs and corresponding correct outputs. In theory, the process is simple -- feed an input, compare the computed output to the correct output, use the error to update the weightings in the interlayer connections, and repeat for each input. Then feed an untested input to get its correct output. Intelegen Inc., Troy, Mich., explains it more elegantly at http://brain.web-us.com/brain/neur_train.html.
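That loop -- feed an input, compare, adjust the weightings, repeat -- can be sketched for the simplest possible case: a single sigmoid unit learning the logical OR function by the delta rule. The learning rate, epoch count and update rule below are illustrative choices, not the only way to train a net:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, inputs):
    x = inputs + [1.0]                  # constant bias input
    return sigmoid(sum(w * v for w, v in zip(weights, x)))

def train(samples, epochs=5000, rate=0.5):
    random.seed(0)                      # deterministic toy run
    n = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n + 1)]
    for _ in range(epochs):
        for inputs, target in samples:
            out = predict(weights, inputs)
            error = target - out            # compare computed vs. correct output
            grad = error * out * (1 - out)  # sigmoid slope scales the step
            x = inputs + [1.0]
            # use the error to update the interlayer "resistors"
            weights = [w + rate * grad * v for w, v in zip(weights, x)]
    return weights

# Known inputs with corresponding correct outputs: logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = train(data)
print([round(predict(w, i)) for i, _ in data])
```

After training, the rounded outputs reproduce the OR truth table. Real networks with hidden layers need backpropagation to push the error back through the stack, but the feed-compare-adjust rhythm is the same.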
The scholarly approach
Genevieve Orr, associate professor and chair of the Department of Computer Science at Willamette University in Salem, Ore., along with Nici Schraudolph and Fred Cummins, has prepared a set of lecture notes for CS-449: Neural Networks, a course that was offered in fall 1999, nearly five years ago. That's a relatively long time for something to stay posted on the Web -- it could evaporate at any time. Because the content in Orr's notes is worth your time, I'd recommend checking it out while it's still available. Take a walk over to http://www.willamette.edu/~gorr/classes/cs449/intro.html to examine the offering.
Start by clicking on the first bullet point in Lecture 1; read to the bottom of the page, click on the next link in the chain, and so on. Soon you'll learn how a neural net can help with linear regression. The example given determines the relationship between an automobile's weight and its rate of gasoline consumption, given that other variables also are in play -- engine displacement, number of cylinders, horsepower, model year and acceleration.
A second example explores the relationship between NOx emissions and the concentration of ethanol in gasoline. Both examples involve questions not far removed from what a plant engineer or plant manager might ask about something happening on the factory floor. Exploring beyond these two examples, however, will mire you in the site's heavy-duty mathematics, which might be a bit much to absorb easily.
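The auto-mileage question boils down to fitting a line, and the simplest possible "network" -- one linear unit nudged by gradient steps -- can do it. The data below is synthetic, not the Willamette set; the true coefficients of 3.0 and 2.0 are invented for illustration:

```python
import random

# Synthetic stand-in for the auto data: fuel use rises roughly linearly
# with vehicle weight (made-up coefficients, made-up weights in tons).
random.seed(1)
data = [(x, 2.0 + 3.0 * x + random.gauss(0, 0.1))
        for x in [0.8, 1.0, 1.2, 1.5, 1.8, 2.0]]

# A "network" of one linear unit: output = slope * weight + intercept.
slope, intercept = 0.0, 0.0
rate = 0.05
for _ in range(20000):
    for x, y in data:
        err = (slope * x + intercept) - y
        slope -= rate * err * x        # gradient step on each connection weight
        intercept -= rate * err        # and on the bias
print(round(slope, 1), round(intercept, 1))  # close to 3.0 and 2.0
```

This is the same feed-compare-adjust loop as before, minus the sigmoid; the net recovers roughly the coefficients that generated the data despite the added noise.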
So, let's take the easy route and let the computer do the math. If you're the sort of person with a modicum of intellectual curiosity, no doubt you'd like to get your hands on some freebie neural network software to try it out on a real-world problem occurring right there in your very own plant. Heck, if you play it right, at the least you could become a local hero and, perhaps, a corporate legend in your own time.
Well, I'm here to show you exactly where the freebies are buried. Give ol' mousie a shovel and have it dig around at http://www.faqs.org/faqs/ai-faq/neural-nets/, the location of a rather extensive, multi-part FAQ on neural networks. Scroll down the page and click on any of the links you see. But, if you want to cut to the chase, go to Part 5 of 7, where Warren S. Sarle at the SAS Institute Inc. in Cary, N.C., maintains the section on free software.
The true techno-geeks out there in readerland may be interested in the original source code for no fewer than 33 neural network programs written in C++, Java, Pascal and FORTRAN. Some of it may be a bit buggy, so use it at your own risk.
However, immediately below those links, the rest of us non-geeks will find no fewer than 44 neural nets available for download. These operate on a variety of platforms, not necessarily a PC, so read carefully before you start downloading. Watch the disclaimers. Some packages have owner's manuals, online help files and e-mail links for asking technical questions.
If that still seems a bit too complex, did you know that Excel, the number-crunching monster from the Microsoft suite of office software, can be endowed with a degree of intelligence that exceeds the innate smartness you'll find in a hunk of common silicon? Yes, my friends, it is possible to run a neural net on that box sitting on your desk.
Thanks to the good folks at Alyuda Research Inc. in Kharkov, Ukraine, you can download Forecaster XL, the company's software package that should enhance your analytical prowess. Dispatch your most trustworthy mouse on a secret mission to the former Soviet Union to retrieve a copy that operates for 30 days before it goes up in smoke a la "Mission Impossible." To complete the mission, your mouse will need these coordinates: http://www.alyuda.com/forecasting-tool-for-excel.htm?g2.
Linus Torvalds of Linux fame isn't the only Finn in the software game. The Finnish University Network has linked to all the neural network software and papers it has been able to find in the public domain. This FTP archive site is a real treasure trove, which you'll find at ftp.funet.fi. Note: Enter the URL as listed; don't prepend the usual http://www. When you get there, check out the readme files first to get your bearings (you're in Finland, after all). Then aim that pesky desk rodent at a directory called /pub/sci/neural, where you'll find the collection of material about neural nets.
By the way, ftp.funet.fi claims to be the largest FTP archive in the world on all subjects, not just neural nets. That sounds like a boast that demands some investigation. After you poke around in the neural net section, start clicking away at random to turn up some remarkable information. I'd be downright disappointed if you didn't find something that has some immediate relevance in your life.