Abstract
We investigate memory-based learning, a type of machine learning characterized by the memorization of observed events. Given an input vector, data are retrieved from a memory and fed to an output computational function. Existing memory-based schemes have usually been introduced, discussed, and treated as distinct, unrelated techniques; in this paper, we unify them into one category. A Memory-Based Learning Structure (MBLS) receives training samples and stores data at memory locations associated with selected (fired) vertices, which are assigned points in the input domain. Memory-based learning that uses a structured vertex distribution is categorized as a tabular approach. One merit of a tabular MBLS is that the size of the allocated memory can be pre-determined and can be much smaller than the number of available observations. Another advantage is that the training process filters out noise automatically, making the MBLS less sensitive to noisy samples. Since data are stored in memory directly addressable by queries, retrieval is fast and memorization of vertex locations is unnecessary.
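The tabular scheme described above can be illustrated with a minimal sketch. The code below assumes a uniform grid of vertices over a one-dimensional input domain and a running-average memory cell per vertex; the class and method names are illustrative, not taken from the paper.

```python
import numpy as np

class TabularMBLS:
    """Sketch of a tabular memory-based learner: a fixed, pre-allocated
    grid of vertices, each with a running-average memory cell.
    (Hypothetical interface, assumed for illustration.)"""

    def __init__(self, lo, hi, n_vertices):
        self.lo, self.hi = lo, hi
        self.n = n_vertices
        # Memory size is fixed in advance, independent of the number
        # of training samples that will be observed.
        self.sums = np.zeros(n_vertices)
        self.counts = np.zeros(n_vertices)

    def _vertex(self, x):
        # Direct addressing: the query itself computes the memory index,
        # so vertex locations never need to be memorized or searched.
        i = int((x - self.lo) / (self.hi - self.lo) * (self.n - 1))
        return min(max(i, 0), self.n - 1)

    def train(self, x, y):
        # Accumulate the sample at the fired vertex; averaging repeated
        # noisy samples filters observation noise automatically.
        i = self._vertex(x)
        self.sums[i] += y
        self.counts[i] += 1

    def query(self, x):
        i = self._vertex(x)
        if self.counts[i] == 0:
            return 0.0
        return self.sums[i] / self.counts[i]
```

For example, a grid of 32 vertices can summarize thousands of noisy training samples, since each cell stores only a sum and a count regardless of how many samples fire it.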