Research Paper No. 1549
Information Sharing in a Supply Chain
Hau L. Lee and Seungjin Whang
https://gsbapps.stanford.edu/researchpapers/library/rp1549.pdf
Summary
Advances in information system technology have had a huge impact on the evolution of supply chain management. As a result of such technological advances, supply chain partners can now work in tight coordination to optimize chain-wide performance, and the realized return may be shared among the partners. A basic enabler for tight coordination is information sharing, which has been greatly facilitated by the advances in information technology. This paper describes the types of information shared: inventory, sales, demand forecast, order status, and production schedule. We discuss how and why this information is shared using industry examples and relating them to academic research. We also discuss three alternative system models of information sharing – the Information Transfer Model, the Third Party Model, and the Information Hub Model.
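To make the three system models concrete, the sketch below renders them as a toy Python program. All class and method names (Retailer, Supplier, ThirdParty, InformationHub, and so on) are my own illustration of the paper's descriptions, not an API or pseudocode from the paper itself.

# A minimal, hypothetical sketch of the three information-sharing
# system models the paper names. Names are illustrative only.

class Retailer:
    def __init__(self, name, inventory):
        self.name, self.inventory = name, inventory

class Supplier:
    def receive(self, retailer_name, inventory):
        print(f"supplier sees {retailer_name} inventory = {inventory}")

# 1. Information Transfer Model: the data owner sends information
#    point-to-point to each partner (e.g., EDI-style messages).
def information_transfer(retailer, supplier):
    supplier.receive(retailer.name, retailer.inventory)

# 2. Third Party Model: an intermediary collects information from the
#    data owner and forwards it to the partners.
class ThirdParty:
    def __init__(self):
        self.collected = {}
    def collect(self, retailer):
        self.collected[retailer.name] = retailer.inventory
    def forward(self, supplier):
        for name, inventory in self.collected.items():
            supplier.receive(name, inventory)

# 3. Information Hub Model: partners publish to and read from a shared
#    repository instead of exchanging messages pairwise.
class InformationHub:
    def __init__(self):
        self.store = {}
    def publish(self, retailer):
        self.store[retailer.name] = retailer.inventory
    def lookup(self, retailer_name):
        return self.store[retailer_name]

retailer, supplier = Retailer("store-1", 120), Supplier()
information_transfer(retailer, supplier)   # model 1: point-to-point
broker = ThirdParty()
broker.collect(retailer)
broker.forward(supplier)                   # model 2: via intermediary
hub = InformationHub()
hub.publish(retailer)
print(hub.lookup("store-1"))               # model 3: partner pulls from hub

The hub variant replaces pairwise links with one shared store, which is the usual argument for hub-style architectures.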
Evaluation
This paper is all about information sharing in a supply chain. The abstract is brief and precise. The paper does not follow the standard format I am familiar with; it uses its own structure to express and explain well the models, types, and constraints of information sharing.
The paper is organized as follows: Section 1 is the Introduction; Section 2 describes the types of information shared and the associated benefits; Section 3 discusses alternative system models to facilitate information sharing; and Section 4 addresses the challenges of information sharing.
Regarding presentation, the paper is not well arranged: the survey results appear in the last part of the paper, while the references appear earlier. On the other hand, the paper uses many examples to illustrate each model of information sharing and each type of shared information.
Wait-free Programming for General Purpose Computations on Graphics Processors
http://www.cs.chalmers.se/~tsigas/papers/Wait-Free-GPGPU-IPDPS08.pdf
Summary
This paper aims at bridging the gap between the lack of synchronization mechanisms in recent GPU architectures and the need for synchronization mechanisms in parallel applications. Based on the intrinsic features of recent GPU architectures, the researchers construct strong synchronization objects such as wait-free and t-resilient read-modify-write objects for a general model of recent GPU architectures without strong hardware synchronization primitives like test-and-set and compare-and-swap. Accesses to the wait-free objects have time complexity O(N), where N is the number of processes. The fact that graphics processors (GPUs) are today's most powerful computational hardware for the dollar has motivated researchers to utilize the ubiquitous and powerful GPUs for general-purpose computing. Recent GPUs feature the single-program multiple-data (SPMD) multicore architecture instead of the single-instruction multiple-data (SIMD) one. However, unlike CPUs, GPUs devote their transistors mainly to data processing rather than data caching and flow control, and consequently most of the powerful GPUs with many cores do not support any synchronization mechanisms between their cores. This prevents GPUs from being deployed more widely for general-purpose computing.
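As a rough illustration of how synchronization can be built from plain reads and writes alone, the sketch below shows a classic single-writer-slot construction: an increment-only counter where each of N threads writes only its own slot, and a read scans all N slots in O(N) steps. This is not the paper's GPU construction (which targets general read-modify-write objects on graphics hardware); it only illustrates how per-process slots avoid test-and-set and compare-and-swap.

# A minimal sketch, not the paper's algorithm: a wait-free,
# increment-only counter built from single-writer registers.
import threading

class WaitFreeCounter:
    def __init__(self, n_threads):
        # One single-writer slot per thread: no two threads ever write
        # the same location, so no atomic read-modify-write is needed.
        self.slots = [0] * n_threads

    def increment(self, tid):
        # Only thread `tid` writes slots[tid]: a plain load and store.
        self.slots[tid] += 1

    def read(self):
        # Wait-free read: a single pass over the N slots, O(N) steps.
        return sum(self.slots)

counter = WaitFreeCounter(4)

def worker(tid):
    for _ in range(1000):
        counter.increment(tid)

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.read())  # 4000: no increments lost, and no locks or CAS used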
Evaluation
This paper is heavy on algorithms. At first glance it looks quite complicated, but the figures and formulas explain well how the authors arrive at the desired results. I also noticed that the algorithms use familiar constructs such as if and if-else statements and for loops.
The results demonstrate that it is possible to construct wait-free synchronization mechanisms for graphics processors (GPUs) without strong synchronization primitives in hardware, and hence that wait-free programming is possible on GPUs. Most of the paper's content consists of algorithm listings.
Map-Reduce for Machine Learning on Multicore
http://www.cs.stanford.edu/people/ang//papers/nips06-mapreducemulticore.pdf
Summary
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
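As a rough illustration of the "summation form" idea, the sketch below parallelizes ordinary linear least squares: the sufficient statistics A = sum_i x_i x_i^T and b = sum_i x_i y_i decompose into per-chunk partial sums (the map step) that are added together (the reduce step) before a single solve. The data, chunk count, and function names are illustrative, not taken from the paper's experiments.

# A minimal sketch of the summation-form map-reduce pattern applied
# to linear least squares. Illustrative only.
import numpy as np
from multiprocessing import Pool

def map_chunk(chunk):
    # Map step: each core computes the partial sums of its data chunk.
    X, y = chunk
    return X.T @ X, X.T @ y

def fit_linear_regression(X, y, n_cores=4):
    chunks = list(zip(np.array_split(X, n_cores),
                      np.array_split(y, n_cores)))
    with Pool(n_cores) as pool:
        partials = pool.map(map_chunk, chunks)
    # Reduce step: add the per-chunk statistics, then do one solve
    # of A @ theta = b for the regression parameters theta.
    A = sum(p[0] for p in partials)
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=10_000)
    print(fit_linear_regression(X, y))  # approximately [1.0, -2.0, 0.5]

Because the partial sums from each chunk add up to exactly the full-data statistics, the parallel fit returns the same answer as a serial one, which is why the authors can call the technique exact rather than approximate.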
Evaluation
The paper focuses on developing a general and exact technique for parallel programming of a large class of machine learning algorithms on multicore processors. The abstract is brief and precise, and the paper follows the standard format. It also uses graphs, formulas, and statistical models that are easy to understand, and it presents good theoretical computational-complexity results.