
bigdata - Machine Learning & Big Data



1 Reply


First of all, your question needs to define more clearly what you mean by Big Data.

Indeed, Big Data is a buzzword that may refer to problems of very different sizes. I tend to define Big Data as the category of problems where the data size or the computation time is big enough for "the hardware abstractions to become broken", meaning that a single commodity machine cannot perform the computations without careful handling of computation and memory.

The scale threshold beyond which data becomes Big Data is therefore unclear and is sensitive to your implementation. Is your algorithm bounded by hard-drive bandwidth? Does the data have to fit into memory? Did you try to avoid unnecessary quadratic costs? Did you make any effort to improve cache efficiency, etc.?
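
To make that concrete, here is a minimal Python sketch (the record size, RAM figure and file layout are assumptions for illustration, not measurements) of the kind of back-of-the-envelope check and single-pass streaming that often keep a problem on one machine:

    RECORD_SIZE_BYTES = 64            # assumed in-memory size of one record
    AVAILABLE_RAM_BYTES = 16 * 2**30  # assume a 16 GB commodity machine

    def fits_in_memory(n_records):
        # Rough check: keep a 2x headroom so the OS and caches still breathe.
        return n_records * RECORD_SIZE_BYTES < AVAILABLE_RAM_BYTES // 2

    def streaming_sum(path):
        # One sequential pass: memory use stays constant, so the run is
        # bounded by hard-drive bandwidth rather than by RAM.
        total = 0.0
        with open(path) as f:
            for line in f:
                total += float(line)
        return total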

From several years of experience in running medium to large-scale machine learning challenges (on clusters of up to 250 commodity machines), I strongly believe that many problems which seem to require a distributed infrastructure can actually be run on a single commodity machine if the problem is expressed correctly. For example, you mention large-scale data for retailers. I have been working on this exact subject for several years, and I often managed to make all the computations run on a single machine, provided a bit of optimisation. My company has been working on a simple custom data format that allows one year of all the data from a very large retailer to be stored within 50 GB, which means a single commodity hard drive can hold 20 years of history. You can have a look, for example, at: https://github.com/Lokad/lokad-receiptstream
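
The actual format is described in the repository above; as a purely illustrative sketch (the field layout below is hypothetical, not the format used in that repository), fixed-width binary packing shows why a receipt line can shrink to a handful of bytes:

    import struct

    # Hypothetical layout of one receipt line: day since epoch (uint16),
    # store id (uint16), product id (uint32), quantity (uint16),
    # unit price in cents (uint32) -- 14 bytes per line in total.
    LINE = struct.Struct("<HHIHI")

    def pack_line(day, store, product, qty, cents):
        return LINE.pack(day, store, product, qty, cents)

    def read_lines(blob):
        # Iterate over a packed buffer without per-line object overhead.
        return LINE.iter_unpack(blob)

    # At ~14 bytes per receipt line, 50 GB holds on the order of
    # 3.5 billion lines on a single hard drive.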

From my experience, it is worth spending time trying to optimize the algorithm and memory usage so that you can avoid resorting to a distributed architecture. Indeed, distributed architectures come with a triple cost. First, the knowledge requirements are substantial. Second, they add a large complexity overhead to the code. Finally, distributed architectures come with a significant latency overhead (with the exception of local multi-threaded distribution).

From a practitioner's point of view, being able to run a given data mining or machine learning algorithm in 30 seconds is one of the key factors for efficiency. I have noticed that when some computation, whether sequential or distributed, takes 10 minutes, my focus and efficiency drop quickly, because it becomes much harder to iterate and to test new ideas quickly. The latency overhead introduced by many of the distributed frameworks is such that you will inevitably end up in this low-efficiency scenario.

If the scale of the problem is such that even with strong effort you cannot perform it on a single machine, then I strongly suggest resorting to off-the-shelf distributed frameworks instead of building your own. One of the most well-known is the MapReduce abstraction, available through Apache Hadoop. Hadoop can run on clusters of 10,000 nodes, probably much more than you will ever need. If you do not own the hardware, you can "rent" the use of a Hadoop cluster, for example through Amazon Elastic MapReduce.
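
As an illustration of the abstraction itself (not of any particular production setup), the canonical word-count job can be written as two small Python scripts and run either locally through a shell pipeline or submitted to a cluster via Hadoop Streaming:

    # mapper.py -- emits one "word <TAB> 1" line per word read from stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(word + "\t1")

The reducer relies on the fact that the framework delivers keys sorted, so equal words arrive on adjacent lines:

    # reducer.py -- sums the counts for each word
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(current_word + "\t" + str(current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(current_word + "\t" + str(current_count))

    # Local dry run:
    #   cat input.txt | python3 mapper.py | sort | python3 reducer.py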

Unfortunately, the MapReduce abstraction is not suited to all machine learning computations. As far as machine learning is concerned, MapReduce is a rigid framework, and numerous cases have proven difficult or inefficient to adapt to it:

– The MapReduce framework is in itself related to functional programming: the Map procedure is applied to each data chunk independently. Therefore, the MapReduce framework is not suited to algorithms where applying the Map procedure to some data chunks requires the results of the same procedure applied to other data chunks as a prerequisite. In other words, the MapReduce framework is not suited when the computations on the different pieces of data are not independent and impose a specific chronology.

– MapReduce is designed to provide a single execution of the map and reduce steps and does not directly support iterative calls. It is therefore not directly suited to the numerous machine learning problems that require iterative processing (Expectation-Maximisation (EM), Belief Propagation, etc.). Implementing these algorithms on top of MapReduce means the user has to engineer a solution that organizes result retrieval and the scheduling of the multiple iterations, so that each map iteration is launched after the reduce phase of the previous iteration has completed and is fed with the results that reduce phase produced (see the sketch after this list).

– Most MapReduce implementations have been designed with production needs and robustness in mind. As a result, the primary concern of the framework is to handle hardware failures and to guarantee the computation results. MapReduce efficiency is therefore partly lowered by these reliability constraints. For example, the serialization of intermediate computation results to hard disk turns out to be rather costly in some cases.

– MapReduce is not suited to asynchronous algorithms.
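
To make the iterative point concrete, here is a plain-Python sketch (no actual Hadoop job, just the orchestration pattern) of one-dimensional k-means, where every iteration is a full map pass followed by a full reduce pass and the driver feeds the reducers' output back into the next map phase:

    import random

    def map_assign(point, centroids):
        # "Map": emit (index of the closest centroid, point).
        best = min(range(len(centroids)),
                   key=lambda i: (point - centroids[i]) ** 2)
        return best, point

    def reduce_update(assignments, k):
        # "Reduce": the new centroid is the mean of the points assigned to it.
        sums, counts = [0.0] * k, [0] * k
        for idx, point in assignments:
            sums[idx] += point
            counts[idx] += 1
        return [sums[i] / counts[i] if counts[i] else random.random()
                for i in range(k)]

    def kmeans_1d(points, k, iterations=10):
        centroids = random.sample(points, k)
        for _ in range(iterations):   # the driver loop MapReduce lacks natively
            assignments = [map_assign(p, centroids) for p in points]  # map pass
            centroids = reduce_update(assignments, k)                 # reduce pass
        return centroids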

Dissatisfaction with the MapReduce framework has led to richer distributed frameworks where more control and freedom are left to the framework user, at the price of more complexity for that user. Among these frameworks, GraphLab and Dryad (both based on Directed Acyclic Graphs of computations) are well known.
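
The underlying idea, stated independently of either framework's actual API, is to declare the computation as a graph of tasks and let a scheduler run them in dependency order. A minimal sketch using Python's standard graphlib (the task names are hypothetical):

    from graphlib import TopologicalSorter

    # Each key lists the tasks it depends on; the graph must stay acyclic.
    tasks = {
        "load": set(),
        "clean": {"load"},
        "features": {"clean"},
        "train": {"features"},
        "evaluate": {"train", "clean"},
    }

    for task in TopologicalSorter(tasks).static_order():
        print("running", task)   # a real engine would dispatch this to workers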

As a consequence, there is no "one size fits all" framework, just as there is no "one size fits all" data storage solution.

To get started with Hadoop, you can have a look at the book Hadoop: The Definitive Guide by Tom White.

If you are interested in how large-scale frameworks fit machine learning requirements, you may be interested in the second chapter (in English) of my PhD thesis, available here: http://tel.archives-ouvertes.fr/docs/00/74/47/68/ANNEX/texfiles/PhD%20Main/PhD.pdf

If you provide more insight about the specific challenge you want to deal with (type of algorithm, size of the data, time and money constraints, etc.), we could probably give you a more specific answer.

Edit: another reference that may prove to be of interest: Scaling up Machine Learning

