Tuesday, October 13, 2009

what's the difference between "METHOD" and "METHODOLOGY"?

In my own understanding, methodology is a collection of theories, concepts and ideas.
The difference between method and methodology is that a method is just a sequence of steps for doing something, while a methodology is a set of methods together with the principles behind them. A method is only part of a methodology.

Choose at least one post/article of your classmates and make a review.

For assignment 7, I chose an article posted by Ms. Cherry Ann Montejo: "Usability and Open Source Software".

Usability and Open Source Software
David M. Nichols and Michael B. Twidale
Department of Computer Science
University of Waikato, Hamilton, New Zealand


Summary
The study reviews the existing evidence on the usability of open source software and discusses how the characteristics of open-source development influence usability. It describes how existing human-computer interaction techniques can be used to leverage distributed networked communities of developers and users to address usability issues.

Evaluation
When I was halfway through reading this paper, I was thinking that open-source software development had overlooked the importance of good usability. But when I was done reading, I concluded that the open source community is simply increasing its awareness of usability issues. Improvements in the usability of open source software do not necessarily mean that such software will displace proprietary software from the desktop; there are many other factors involved. Improved usability, however, is a necessary condition for such a spread.


Summary:
The open source community is increasing its awareness of usability issues. This paper has identified certain barriers to usability and explored how these are being, and can be, addressed. Several of the approaches directly mirror the problems identified: automated evaluation where there is a shortage of human expertise, and encouraging various kinds of end user and usability expert participation to re-balance the development community towards average users. If traditional OSS development is about scratching a personal itch, usability is about being aware of and concerned about the itches of others.

Deeper investigation of the issues outlined in this paper could take various forms. One of the great advantages of OSS development is that its process is to a large extent visible and recorded. A study of the archives of projects will enable a verification of the claims and hypotheses ventured here, as well as the uncovering of a richer understanding of the nature of current usability discussions and development work. Example questions include: “Do certain types of usability issues figure disproportionately in discussions and development effort, leaving others ignored or underdeveloped?”, “What distinguishes OSS projects that are especially innovative in their functionality and interface designs?” and “Can interface design consistency be preserved in the traditional modular OSS development environment?”

The approaches outlined in the previous section need further investigation, and indeed experimentation, to see if they can be feasibly used in OSS projects without disrupting the factors that make traditional functionality-centric OSS development so effective. These approaches are not necessarily restricted to OSS; several can be applied to proprietary software. Indeed the ideas derived from discount usability engineering and participatory design originated in developing better proprietary software. However, they may be even more appropriate for open source development in that they map well on to the strengths of a volunteer developer community with open discussion.

Most HCI research has concentrated on pre-release activities that inform design and relatively little on post-release techniques (Hartson and Castillo, 1998; Smilowitz et al., 1994). It is noteworthy that participatory design is a field in its own right whereas participative usage is usually quickly passed over by HCI textbooks. Thus OSS development in this case need not merely play catch-up with the greater end user focus of the commercial world, but potentially can innovate in exploring how to involve end users in subsequent redesign. There have been several calls in the literature (Shneiderman, 2002; Lieberman and Fry, 2001; Fischer, 1998) for users to become more involved in software development beyond standard user-centred design activities (such as usability testing, prototype evaluation and fieldwork observation). It is noticeable that these comments seem to ignore that this involvement is already happening in OSS projects. Raymond (1998) comments that “debugging is parallelizable”; we can add that usability reporting, analysis and testing are also parallelisable. However, certain aspects of usability design do not appear to be so easily parallelisable. We believe that these issues should be the focus of future research and development: understanding how they have operated in successful projects, and designing and testing technological and organisational mechanisms to enhance future parallelisation.
Improvements in the usability of open source software do not necessarily mean that such software will displace proprietary software from the desktop; there are many other factors involved, for example inertia, support, legislation, legacy systems and more. However, improved usability is a necessary condition for such a spread. Lieberman and Fry (2001) foresee that ‘interacting with buggy software will be a cooperative problem solving activity of the end user, the system, and the developer.’ For some open source developers this is already true; expanding this situation to (potentially) include all of the end-users of the system would mark a significant change in software development practices. There are many techniques from HCI that can be easily and cheaply adopted by open source developers. Additionally there are several approaches that seem to provide a particularly good fit with a distributed networked community of users and developers. If open source projects can provide a simple framework for users to contribute non-technical information about software to the developers, then they can leverage and promote the participatory ethos amongst their users. Raymond (1998) proposed that ‘given enough eyeballs all bugs are shallow.’ For seeing usability bugs, the traditional open source community may comprise the wrong kind of eyeballs. However it may be that, by encouraging greater involvement of usability experts and end users, it is the case that given enough user experience reports all usability issues are shallow. By further engaging typical users in the development process, OSS projects can create a networked development community that can do for usability what it has already done for functionality and reliability.

Evaluation:
Improved usability is a necessary condition for open source software to spread, although many other factors are involved, such as inertia, support, legislation and legacy systems. I agree with Ms. Cherry Ann Montejo and with this research paper's conclusion that the open source community is increasing its awareness of usability issues, and that improvements in the usability of open source software do not necessarily mean that such software will displace proprietary software from the desktop.

References
http://opensource.mit.edu/papers/nicholstwidale1.pdf


Monday, October 5, 2009

Identify and discuss key factors for publishing research in top-tier journals like CSP

CSP is devoted to problems of Central and Eastern Europe. It is a forum for scholars from a range of disciplines: language and linguistics, literature, history, political science, sociology, economics, anthropology, geography and the arts. This is the only interdisciplinary scholarly outlet for Slavists in Canada and one of the major journals in the field in North America. It has an international readership and subscribers.

All articles submitted to CSP are subject to blind refereeing. Only previously unpublished manuscripts that are not under consideration by another journal are considered.

*** The maximum length for articles, inclusive of bibliography, notes, tables etc., should be 8000 words, and for Commentary & Issues 4000 words, including notes and references. The typescript should be carefully checked for errors before it is submitted for publication. Authors are responsible for the accuracy of quotations, for supplying complete and correct references, and for obtaining permission where needed to cite another person's material. Papers that significantly exceed these limits may be returned to the contributor for editing before being considered.
*** Papers are expected to be accessible and jargon-free. Lengthy quotations of more than 40 words should be indented; shorter quotes should be retained within the body of the text. Authors are responsible for obtaining permission from copyright holders for reproducing any illustrations, tables, figures or lengthy quotations previously published elsewhere. Tables and Figures should be presented on separate sheets of paper at the end of the article. Their position within the text should be clearly indicated.
*** For citing and referencing use the Harvard-style system. References in the text should read as Hall (1995: 63-4), or Hall and Smith (1993, 1998). Use 'et al.' when citing a work by more than two authors, e.g. Hall et al. (1997). The letters a, b, c etc. should be used to distinguish citations of different works by the same author in the same year, e.g. Hall (1988a, b). Enclose within a single pair of parentheses a series of references, separated by semicolons, e.g. (Hall and Smith, 1993; Jones, 1985). Use also parentheses to insert any brief phrase associated with the reference, e.g. (but see Jones, 2000: 23-4). For an institutional authorship, supply the minimum citation from the beginning of the complete reference, e.g. (Department of Health, 1996: 36). The reference list should be alphabetically ordered.
*** Authors will be asked to provide an electronic copy of the final version of their paper following acceptance for publication. The author is responsible for ensuring that the final hard copy and electronic versions of the manuscript are identical.
*** Authors are sent proofs for checking and correction. Proofs should be corrected carefully; the responsibility for detecting errors lies with the author. On publication authors will receive a bound copy and access to the final pdf of their article.
*** It is a condition that copyright should be assigned to Critical Social Policy Limited, with certain rights retained by authors. Further details will be sent to authors before publication. No paper will be published until the appropriate forms are completed and returned.

RESOURCES:
http://www.ualberta.ca/~csp/
http://www.sagepub.com/journalsProdManSub.nav?prodId=Journal200748

How do you know if a piece of research work is good or not? how are they evaluated?

Research is the systematic process of collecting and analyzing information to increase our understanding of the phenomenon under study. It is the function of the researcher to contribute to the understanding of the phenomenon and to communicate that understanding to others.
Evaluation is a methodological area that is closely related to, but distinguishable from more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much. Here we introduce the idea of evaluation and some of the major terms and issues in the field.

Probably the most frequently given definition of evaluation is the systematic assessment of the worth or merit of some object. This definition is hardly perfect: there are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few. Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation: the systematic acquisition and assessment of information to provide useful feedback about some object.
Both definitions agree that evaluation is a systematic endeavor and both use the deliberately ambiguous term 'object' which could refer to a program, policy, technology, person, need, activity, and so on. The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit because all evaluation work involves collecting and sifting through data, making judgements about the validity of the information and of inferences we derive from it, whether or not an assessment of worth or merit results.

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically driven feedback.

There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated -- they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.
Formative evaluation includes several evaluation types:
• needs assessment determines who needs the program, how great the need is, and what might work to meet the need
• evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
• structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
• implementation evaluation monitors the fidelity of the program or technology delivery
• process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures
Summative evaluation can also be subdivided:
• outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
• impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole
• cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
• secondary analysis reexamines existing data to address new questions or use methods not previously employed
• meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question
15 Steps to Good Research
1. Define and articulate a research question (formulate a research hypothesis).
2. Identify possible sources of information in many types and formats.
3. Judge the scope of the project.
4. Reevaluate the research question based on the nature and extent of information available and the parameters of the research project.
5. Select the most appropriate investigative methods (surveys, interviews, experiments) and research tools (periodical indexes, databases, websites).
6. Plan the research project.
7. Retrieve information using a variety of methods (draw on a repertoire of skills).
8. Refine the search strategy as necessary.
9. Write and organize useful notes and keep track of sources.
10. Evaluate sources using appropriate criteria.
11. Synthesize, analyze and integrate information sources and prior knowledge.
12. Revise hypothesis as necessary.
13. Use information effectively for a specific purpose.
14. Understand such issues as plagiarism, ownership of information (implications of copyright to some extent), and costs of information.
15. Cite properly and give credit for sources of ideas.


According to Simon Peyton Jones, in "How to write a good research paper":
Structure

• Abstract (4 sentences)
• Introduction (1 page)
• The problem (1 page)
• My idea (2 pages)
• The details (5 pages)
• Related work (1-2 pages)
• Conclusions and further work (0.5 pages)
Criteria for a good piece of research
by Simon Peyton Jones and Alan Bundy
Major criteria

Here are the major criteria against which your proposal will be judged. Read through your case for support repeatedly, and ask whether the answers to the questions below are clear, even to a non-expert.
• Does the proposal address a well-formulated problem?
• Is it a research problem, or is it just a routine application of known techniques?
• Is it an important problem, whose solution will have useful effects?
• Is special funding necessary to solve the problem, or to solve it quickly enough, or could it be solved using the normal resources of a well-found laboratory?
• Do the proposers have a good idea on which to base their work? The proposal must explain the idea in sufficient detail to convince the reader that the idea has some substance, and should explain why there is reason to believe that it is indeed a good idea. It is absolutely not enough merely to identify a wish-list of desirable goals (a very common fault). There must be significant technical substance to the proposal.
• Does the proposal explain clearly what work will be done? Does it explain what results are expected and how they will be evaluated? How would it be possible to judge whether the work was successful?
• Is there evidence that the proposers know about the work that others have done on the problem? This evidence may take the form of a short review as well as representative references.
• Do the proposers have a good track record, both of doing good research and of publishing it? A representative selection of relevant publications by the proposers should be cited. Absence of a track record is clearly not a disqualifying characteristic, especially in the case of young researchers, but a consistent failure to publish raises question marks.
Secondary criteria
Some secondary criteria may be applied to separate closely-matched proposals. It is often essentially impossible to distinguish in a truly objective manner among such proposals and it is sad that it is necessary to do so. The criteria are ambiguous and conflict with each other, so the committee simply has to use its best judgement in making its recommendations.
• An applicant with little existing funding may deserve to be placed ahead of a well-funded one. On the other hand, existing funding provides evidence of a good track record.
• There is merit in funding a proposal to keep a strong research team together; but it is also important to give priority to new researchers in the field.
• An attempt is made to maintain a reasonable balance between different research areas, where this is possible.
• Evidence of industrial interest in a proposal, and of its potential for future exploitation, will usually count in its favour. The closer the research is to producing a product, the more industrial involvement is required, and this should usually include some industrial contribution to the project. The case for support should include some 'route to market' plan, i.e. you should have thought about how the research will eventually become a product; identifying an industrial partner is usually part of such a plan.
• A proposal will benefit if it is seen to address recommendations of Technology Foresight. It is worth looking at the relevant Foresight Panel reports and including quotes in your case for support that relate to your proposal.


RESOURCES:
http://www.socialresearchmethods.net/kb/intreval.htm
http://www.library.georgetown.edu/tutorials/research-guides/15-steps
http://research.microsoft.com/en-us/um/people/simonpj/papers/giving-a-talk/writing-a-paper-slides.pdf
http://research.microsoft.com/en-us/um/people/simonpj/papers/Proposal.html

role of research topic in deciding your future career

What do you think is the role of research topic in deciding your future career?

The role of a research topic in deciding my future career is that it helps in molding my own skills, abilities, personal qualities, and interests. A choice of career must depend primarily on an assessment of those same qualities, together with my availability.

A future career includes all the roles we undertake throughout our life: education, training, paid and unpaid work, family, volunteer work, leisure activities and more. Career was traditionally associated with paid employment and generally referred to a single occupation. The concept of a job for life is no longer a reality: young people now are likely to experience five to eight major career changes in their lives, across a variety of industry sectors, and they will experience more fluid forms of working, with increasing casual, contract and part-time options. Activities that contribute towards a career can include different life roles, volunteer work, community activities, leisure activities, and education and training, whether undertaken in recognized educational institutions or through work experience. In this new climate, individuals need to be adaptable, innovative, flexible, resilient and collaborative to thrive in all aspects of their life. It is critical to manage your life, learning and work if you are to successfully navigate a dynamic and complex economic landscape.

Making the best career choices involves, first, knowing what you like (your interests), what you are good at (your skills and abilities) and what is important to you (your values); second, understanding the world of work, that is, knowing your available options and what jobs are out there; third, learning how to make informed decisions about your possible options; and lastly, deciding and setting about achieving your objectives. Careers of the future will change how people communicate, learn and live. Some of these jobs are light-years away from creation, but for others, the future is now.

A research topic is a category or general area of interest. In that way, finding a research topic is like finding my future career, because I am looking for a topic that interests me. If I begin thinking about possible topics when the assignment is given, I have already begun the difficult, yet rewarding, task of planning and organizing. Once I make the assignment a priority in my mind, I may begin to have ideas throughout the day. Brainstorming is often a successful way for students to get some of these ideas down on paper. Seeing one's ideas in writing is often an impetus for the writing process. Though brainstorming is particularly effective when a topic has been chosen, it can also benefit the student who is unable to narrow a topic. It consists of a timed writing session during which the student jots down, often in list or bulleted form, any ideas that come to mind. At the end of the timed period, the student peruses the list for patterns of consistency. If something seems to stand out more than the rest, it may be wise to pursue it as a topic possibility.

It is important to keep in mind that an initial topic may not be the exact topic about which I end up writing. Change is common in research and should be embraced as one of its many characteristics. Research is a process of investigation. It is an examination of a subject from different points of view. It's not just a trip to the library to pick up a stack of materials, or picking the first five hits from a computer search. Research is a hunt for the truth. It is getting to know a subject by reading up on it, reflecting, playing with the ideas, choosing the areas that interest you and following up on them. Research is the way you educate yourself.

Writing a research paper means working with stacks of articles and books and drawing on many sources of information: articles, books, people, and artworks. Yet a research paper is more than the sum of your sources, more than a collection of different pieces of information about a topic, and more than a review of the literature. A research paper analyzes a perspective or argues a point. Regardless of the type of research paper you are writing, your finished research paper should present your own thinking backed up by others' ideas and information.

In deciding my future career, the role of a research topic is to explore my skills, attitudes, and interests. Having a research topic is a way to educate myself. It prepares me for choosing my future career and molds me into a better professional, with good skills and a good attitude; most importantly, my future career must connect with my interests in life.

Sunday, October 4, 2009

variety of technical topics

Department of Computer Science
Xavier University - Ateneo de Cagayan


University Mission Statement on Research
"As a university, Xavier pursues truth and excellence in teaching, research and service to communities: it is concerned with the preservation of the environment and the integrity of creation; it prepares men and women with competence, skills and keen sense of responsibility to their communities ..."
College of Engineering Mission Statement on Research
"As a resource center, it shall establish a science and technology research center, develop information technology and serve as a forum for dialogues on technology and social impact"
Mission Statement
"As teaching, research and service unit, it shall conduct applied research in computer science to develop and adapt new methods and improve on existing solutions for harnessing information and communication technologies in the service of the community"
General Objectives
The department shall conduct research in the areas of databases and information systems; multimedia and computer-aided learning systems; networks and computer systems; and programming and software engineering systems:
• To establish a system for investigating new ways of harnessing information and communication technologies in the service of the community;
• To take the lead in developing computer-based alternative and interactive methods of learning and learning management;
• To put in place a mechanism for responding to the rapid pace of technological change and the systematic updating of the computer science curriculum and syllabi;
• To encourage and provide incentive to faculty and students to pursue research in collaboration with academe, industry and government.

Units of Study
• Each area of computer science chooses different units of study.
– In algorithms: algorithms
– In AI: methods, techniques, algorithms
– In Languages: languages, language components or features
– In Architecture: instruction sets, memory hierarchies, architectures
– In Theory: models, theorems, proof techniques
– In Systems: systems, components of systems, system architectures

Research Topic/Area of Interest
• Computer-based Tutorial Systems/Database Systems
• Intelligent Tutoring Systems
• Comparative Study of UML-Supported and Non-UML-Supported Software Development
• Executive Information Systems
• Information Storage and Retrieval in an Intranet
• GIS-based Visualization and Retrieval Tool for Population Census Data
• ICT in the Classroom: Elementary and High School
• Web-enabled Database Application (CDO Doctors Information System)
• Developing Web-based Business Applications using Distributed Databases or Web-based Geographic Information Systems
• Expert System for Academic Evaluation/Counseling
• Computer-Assisted Business System Engineering Tool: Benefits of Open Source and Reusable Objects
• High Speed Wireless Networks
• Virtual Memory Management
• An IP Address-less Bridge-Firewall
• Wireless LAN for Small and Medium Enterprises


Each research has Research Group
ISDM - Information Systems and Data Management Group
MCAT - Multimedia and Computer-Aided Tools Group
NWCS - Networking and Computer Systems Group
SEPL - Software Engineering and Programming Languages

Major Research Areas/Groups
1. Information Systems and Data Management Group
The Information Systems and Data Management Group aims to develop new solutions to local problems in the integrated management and retrieval of data, information and knowledge in highly distributed networked environments in academe, industry and government. The group is also interested in information systems resource management, systems development methodologies and the use of information systems as a competitive advantage and as a tool for decision making at the operational, tactical and strategic levels of an organization. Specific research interests include digital libraries, data warehousing, data mining, information search and retrieval, Web-enabled databases, electronic commerce applications, object-oriented databases and distributed databases; systems development methodologies, enterprise collaboration systems, social, ethical and security issues in information systems, electronic business and decision support systems.
2. Multimedia and Computer-Aided Tools Group
The Multimedia and Computer-Aided Tools Group aims to pursue research and development efforts in educational and other applications of multimedia, computing, communications and connectivity to education, learning and other work processes in and outside the university. Specific research interests include educationally-oriented programming language tools, online course delivery and management; intelligent tutoring systems and adaptive learning environments; distance learning, collaborative teaching, user interfaces and human-computer interaction; CASE tools, applied artificial intelligence, human factors engineering, ergonomics, systems engineering and work flow/schedule optimization.
3. Networking and Computer Systems Group
The Networking and Computer Systems Group will pursue a variety of research topics related to computer networks and distributed computer systems. The research work combines theoretical foundations with practical applications and will involve interaction with the user community. Specific research interests include network resource management and monitoring, distributed systems and tools; client-server computing; parallel systems; communication protocols for continuous media such as video and audio over the current Internet networks as well as over high-speed networks; multimedia network protocols, extensible operating systems to support multimedia, enterprise computing and connectivity standards.
4. Software Engineering and Programming Languages Group
The Software Engineering and Programming Languages Group will conduct research in a variety of areas including contemporary programming languages, compilers and software engineering methodologies, and concurrent and event-driven software. Specific research interests include software architecture, software evolution and rapid migration; and advanced programming languages, including design, semantics, implementations, programming environment tools, collaborative programming, object-oriented programming and prototyping.
Research Group Membership and Responsibilities
• Conducting research and extension services in their respective areas;
• Periodically suggesting updates to the curriculum based on research findings;
• Systematically developing new or improving existing syllabi/course material in subjects adjunct to their respective research areas;
• Carrying out collaborative research and extension services with partners from other academic institutions, industry and government.
Research Life Cycle
• Definition. Exploratory research defines a new problem, new constraints, new opportunity, or a new approach.
• Initial Solutions. Initial algorithms, designs, theorems, programs are developed.
• Evaluation of Initial Solutions. Initial solutions are evaluated and refined in isolation.
• Comparison of Solutions. Solutions are compared to one another and also to ideal solutions.
• Space of Possible Solutions. Theorems are proved about the limits on any solutions. Existing solutions are placed in a common framework to determine whether all possible solutions have been found.
• Technology Transfer. Best approaches are transferred to users.

• Not all of these phases are seen in all areas. For units with a high cost of evaluation, only relatively weak methods can be applied to evaluate initial solutions and compare solutions.
• For units with high variety, it is difficult to understand the space of all possible solutions.

RESOURCES:
http://www.ithaca.edu/library/course/methodstoc.html

scientific papers

Research Paper No. 1549
Information Sharing in a Supply Chain
Hau L. Lee and Seungjin Whang
https://gsbapps.stanford.edu/researchpapers/library/rp1549.pdf


Summary
Advances in information system technology have had a huge impact on the evolution of supply chain management. As a result of such technological advances, supply chain partners can now work in tight coordination to optimize the chain-wide performance, and the realized return may be shared among the partners. A basic enabler for tight coordination is information sharing, which has been greatly facilitated by the advances in information technology. This paper describes the types of information shared: inventory, sales, demand forecast, order status, and production schedule. We discuss how and why this information is shared using industry examples and relating them to academic research. We also discuss three alternative system models of information sharing – the Information Transfer model, the Third Party Model, and the Information Hub Model.
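To make the three system models a bit more concrete for myself, here is a toy Python sketch of the Information Hub model (my own illustration, not from the paper; all class and method names are hypothetical). Instead of partners exchanging data pairwise, as in the Information Transfer model, each partner publishes updates to a hub, and the hub forwards them to whoever has subscribed to that type of information.

    from collections import defaultdict

    # Toy illustration (not from the paper) of the Information Hub model:
    # partners publish updates to a central hub, which forwards each update
    # to every partner subscribed to that type of information.

    class Partner:
        def __init__(self, name):
            self.name = name

        def receive(self, sender, data_type, payload):
            print(f"{self.name} got {data_type} from {sender}: {payload}")

    class InformationHub:
        def __init__(self):
            self.subscribers = defaultdict(list)   # data type -> partners

        def subscribe(self, partner, data_type):
            self.subscribers[data_type].append(partner)

        def publish(self, sender, data_type, payload):
            # The hub handles distribution, so the sender never needs to
            # know who consumes its data -- unlike direct pairwise transfer.
            for partner in self.subscribers[data_type]:
                if partner is not sender:
                    partner.receive(sender.name, data_type, payload)

    hub = InformationHub()
    retailer, supplier = Partner("Retailer"), Partner("Supplier")
    hub.subscribe(supplier, "inventory")
    hub.publish(retailer, "inventory", {"sku": "A100", "on_hand": 42})

As I understand the paper, the Third Party Model is similar except that the intermediary also stores and manages the data as a service, while in the Information Transfer model partners would exchange information with each other directly.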

Evaluation
This paper was all about information sharing. The abstract was brief and precise. The paper did not follow the standard format that I know; it has its own format to express and explain the models, types, and constraints of information sharing.
The paper is organized as follows: Section 1 is the introduction; Section 2 describes the types of information shared and the associated benefits; Section 3 discusses alternative system models to facilitate information sharing; Section 4 addresses the challenges of information sharing.
Regarding the presentation of the paper, it is not well arranged: the survey results were placed in the last part of the paper, while the references appeared earlier. The paper uses many examples to illustrate each model of information sharing and each type of shared information.



Wait-free Programming for General Purpose Computations on Graphics Processors
http://www.cs.chalmers.se/~tsigas/papers/Wait-Free-GPGPU-IPDPS08.pdf


Summary
This paper aims at bridging the gap between the lack of synchronization mechanisms in recent GPU architectures and the need for synchronization mechanisms in parallel applications. Based on the intrinsic features of recent GPU architectures, the researchers construct strong synchronization objects like wait-free and t-resilient read-modify-write objects for a general model of recent GPU architectures without strong hardware synchronization primitives like test-and-set and compare-and-swap. Accesses to the wait-free objects have time complexity O(N), where N is the number of processes. The fact that graphics processors (GPUs) are today's most powerful computational hardware for the dollar has motivated researchers to utilize the ubiquitous and powerful GPUs for general-purpose computing. Recent GPUs feature the single-program multiple-data (SPMD) multicore architecture instead of the single-instruction multiple-data (SIMD) one. However, unlike CPUs, GPUs devote their transistors mainly to data processing rather than data caching and flow control, and consequently most of the powerful GPUs with many cores do not support any synchronization mechanisms between their cores. This prevents GPUs from being deployed more widely for general-purpose computing.
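The paper's actual construction is quite involved, but a classical textbook example gives the flavor of a wait-free shared object with O(N) access cost built without test-and-set or compare-and-swap: a counter made of single-writer slots, where each of the N processes increments only its own slot and a read sums all N slots. The sketch below is my own illustration of that idea in Python, not the authors' algorithm.

    import threading

    # Illustrative sketch (not the paper's algorithm): a wait-free counter
    # built from single-writer registers. Thread i writes only slots[i], so
    # increments need no read-modify-write primitive; reading costs O(N).
    N = 4
    slots = [0] * N

    def increment(pid):
        slots[pid] += 1      # single writer: only thread `pid` touches this slot

    def read_counter():
        return sum(slots)    # O(N) accesses, as in the complexity bound above

    def worker(pid):
        for _ in range(1000):
            increment(pid)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(read_counter())    # 4000: each thread added 1000 to its own slot

Every operation completes in a bounded number of its own steps regardless of what the other threads do, which is exactly the wait-free property; what the paper contributes is achieving this for stronger read-modify-write objects on actual GPU hardware.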

Evaluation
This paper was more about the algorithms: at first look it is really complicated, but the figures and formulas explain well how the researchers arrived at the desired results. I also noticed that they used statements like if, if-else and for loops.
The results demonstrate that it is possible to construct wait-free synchronization mechanisms for graphics processors (GPUs) without the need for strong synchronization primitives in hardware, and that wait-free programming is therefore possible for GPUs. Most of the paper's content was the algorithms themselves.




Map-Reduce for Machine Learning on Multicore

Summary
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
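The "summation form" idea is concrete enough to sketch. For least-squares linear regression, for example, the solution is theta = (X^T X)^{-1} X^T y, and both X^T X = sum_i x_i x_i^T and X^T y = sum_i x_i y_i can be computed as partial sums over chunks of the data (the map step) and then added together (the reduce step). The following Python sketch is my own illustration of this pattern using multiprocessing, not the authors' code.

    import numpy as np
    from multiprocessing import Pool

    # Sketch (not the authors' code) of the "summation form": linear regression
    # needs only A = sum(x_i x_i^T) and b = sum(x_i y_i), so each mapper returns
    # partial sums over its chunk and the reducer just adds them up.

    def mapper(chunk):
        X, y = chunk
        return X.T @ X, X.T @ y                  # partial sums for this chunk

    def solve_lr_parallel(X, y, n_workers=4):
        chunks = list(zip(np.array_split(X, n_workers),
                          np.array_split(y, n_workers)))
        with Pool(n_workers) as pool:
            partials = pool.map(mapper, chunks)  # map step, runs on all cores
        A = sum(a for a, _ in partials)          # reduce step: add partial sums
        b = sum(b for _, b in partials)
        return np.linalg.solve(A, b)             # solve the normal equations

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10000, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=10000)
        print(solve_lr_parallel(X, y))           # close to [1.0, -2.0, 0.5]

Because the map step dominates the work and the chunks are independent, the speedup is roughly linear in the number of cores, which matches the experimental results the abstract reports.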

Evaluation
The paper focuses on developing a general and exact technique for parallel programming of a large class of machine learning algorithms for multicore processors. The abstract was brief and precise, and the paper follows the standard format. It also uses graphs, formulas and statistical models that are easy to understand, and it shows good theoretical computational complexity results.

Friday, June 26, 2009

HELLOOOOO

Hi! I'm Ace Andrion Sandoval, a 4th-year Bachelor of Science in Computer Science student.
I'm an eager learner.
Most of the time, I challenge myself to do something better than someone else.
I expect to be challenged again to do research worth my effort and time.
I expect to have a research material that could somehow help other people.