Monday, January 27, 2020

A Review On Enterprise Resource Planning Systems Information Technology Essay

A Review On Enterprise Resource Planning Systems Information Technology Essay

INTRODUCTION

The defining feature of enterprise resource planning (ERP) systems is that they integrate across functions to create a single, unified system rather than a group of separate, insular applications. Because an ERP system provides optimal solutions and strong control over company operations, every business is looking forward to adopting one. Since currently available ERP packages charge high licensing and support costs, businesses need to find an open source alternative. This document presents feasible open source alternatives to the current market leader in proprietary ERP, SAP ECC.

Open Source ERP Systems

The following are popular open source ERP systems available on the market today. Although many open source packages exist, such as Opentaps, Ofbiz and ERP5, only those that broadly fulfil business requirements in comparison with SAP solutions are considered here.

Adempiere
This is one of the major ERP leaders among open source technologies and has proved most successful with small and medium-sized users, mainly in the retail, trading, manufacturing and service sectors [Adempiere Release Manual]. It is backed by a highly motivated and active community and ranks in the top 5 on sourceforge.net.

Compiere
Compiere is currently the most popular open source ERP+CRM application. It is a comprehensive solution for SMEs, serving the distribution, retail, manufacturing and service industries with highly adaptable, easy-to-use, enterprise-class applications. Compiere was the first in ERP to introduce a design through which applications can be customised and extended without any programming.

Openbravo
This is a more commercially oriented open source ERP. It provides a robust application that integrates distribution, inventory, e-commerce, accounting and point-of-sale workflows. It received best open source awards from InfoWorld in 2009 and 2010 and many other recognitions from various organisations. It is developed in Java, and Oracle or PostgreSQL databases can be used.

OpenERP (formerly TinyERP)
This is a comprehensive suite covering all the operations of an enterprise. It follows a modular approach that lets a customer start with one application and add others as they go. It is designed around the well-known three-tier MVC architecture, written in Python, with PostgreSQL as the database. Clients are required to install a Flash component in their web browser for access.

OpenPro
OpenPro is a leader in licensed enterprise resource planning (ERP) software built on open source technology, and it was the first web-based, on-demand ERP software on the market, starting in 1988. The software is platform independent and written in open source PHP. Over the years it has shown continuous improvement, adding advanced features alongside application stabilisation. It is recognised as best suited to larger businesses.

Open Source ERP Pros and Cons

The following are the advantages and disadvantages of the open source alternatives above.

ADempiere

Advantages:
Architecture: model-driven architecture; an active data dictionary that reduces customisation coding work by 80%; browser/server and client/server modes; database independence (PostgreSQL/Oracle/(MySQL)).
Function: provides ERP, CRM and POS.
A manufacturing module; multi-organisation, multi-currency, multi-accounting and internationalisation support.
Market: top 5 on SourceForge.net; an existing customer/user base; rich experience in real business environments rather than being a guinea pig in the laboratory.
Community: highly motivated and active, with global support.

Disadvantages:
Community: no formal political structure for making decisions; no specified road map; lack of sufficient funding, so it cannot afford core developers.
Market: not well known to the general public; the customer/user base is small compared to SAP.
Implementation: not simple enough for the quick implementation that is important for a small enterprise; the total cost of ownership (TCO) shifts from licensing costs to a lack of expert local support in some countries.

Compiere

Advantages:
Multi-company and complex corporate hierarchy support.
Multi-currency and multi-language support.
Delivers a fully integrated ERP product including complex warehouse management processing.
Rich internet applications using Ajax integration, delivering functionality, usability, responsiveness and personalisation through a web browser.
The system uses a centralised, active application dictionary to store metadata and rules for managing custom solutions.
Easy upgrades without programming: an upgrade tool preserves customer customisations through upgrades.
Provides complete application-level security.
Vendor independent; stable and well tested.
No information hiding: full transparency of the code through open source.
Extraordinarily wide reach, with no starting costs.
Model-driven architecture; the application dictionary is available.
Any SQL database is supported.

Disadvantages:
Not currently fully open source, given the choice of the Oracle database.
Based on a thick Java Swing client.
The GNU licence requires derivative work to be returned.

OpenBravo

Advantages:
Easy integration with other applications, since it supports REST and SOAP services.
Provides training, support, consulting and outsourcing options to partners and clients at lower cost, which means a lower total cost of ownership.
Web-based ERP.
Revolutionary architecture: a unique combination of MVC and MDD, an innovative approach to building and maintaining software.
Scalability: Openbravo can scale irrespective of the size and sector of the company.
Easy installation and no vendor/supplier lock-in.
Modularity: the application can be extended easily without maintenance issues.

Disadvantages:
Even an ordinary person can write a client class for the REST service; this raises security issues and casts doubt on code reliability.
Support and maintenance costs are currently low, but this is not guaranteed for the future, since OpenBravo is commercially focused.
Loose coupling between the database and the code from a coding perspective.

OpenERP (formerly TinyERP)

Advantages:
It shares most of the advantages of the other applications.
Its modular structure makes adopting new applications easy.
Certain customisation can be done online through a web browser.
SaaS services are available.
Very small footprint: the Windows installer is just 85 MB, and installation takes only minutes.
Ease of use.
Advanced technology.
Very innovative software: double-entry management in inventory control.
Internationalisation.
Scored 72% in an independent expert group's evaluation of open source ERP software.

Disadvantages:

OpenPro

Advantages:
Applicable to businesses of all sizes.
Written in open source PHP.
The first web-based ERP software: mature, reliable and fast.
Because it is web based, clients, users and sales representatives can access the system remotely from anywhere over the web.
No maintenance cost.
Shorter implementation time.

Disadvantages:
No frequent updates for the open source edition, whereas they are available for OpenPro's commercial products.
No support free of charge; it must be paid for.
More commercially oriented than open source.

Evaluation of Open Source ERP Systems

In order to select the best open source alternative, a hierarchically structured set of criteria was chosen. The information on the open source alternatives was categorised into five groups and the alternatives compared with one another against the company's requirements:
Functional fit
Flexibility: customisation, flexible upgrades, internationalisation, user friendliness, architecture, scalability, security, interfaces, operating system independence, database independence, programming language
Support: support infrastructure, training, documentation
Continuity: project structure, community activity, transparency, update frequency, other lock-in effects
Maturity: development status, reference sites
The resulting comparison table gives a clear picture of the selected ERP system.

ERP System Implementation Strategy

Implementation is the key process for which a company needs to identify a strategic approach. Guido Capaldo's study showed that a plan-oriented approach is required for estimating the capabilities that firms should have in order to select the most appropriate implementation strategy. The implementation process is divided into five phases for smooth execution, each with its own deadline for successful completion.

Phase 1: Strategic Planning
Project team: form a project team with first-line employees from each department plus senior management. S.M.A.R.T. objectives will be prepared for the whole team, and team members will be assigned specific tasks. An activity tracker will be designed to track each activity, such as timelines, training plan formulation and objective finalisation.
Examine current business processes: the team should examine their individual departments' business processes to check whether they are ready to automate and to identify any gaps that need to be filled.
Set objectives: clearly defined objectives need to be set. Since implementation is a major task, setting S.M.A.R.T. objectives is crucial; to define an objective, the team should understand the scope of the business.
Develop a project plan: the team should develop a project plan with clearly defined objectives, timelines and training procedures, with each team member's individual responsibilities stated. As a result, every team member's to-do list will be clearly defined.

Phase 2: Procedure Review
Review software capabilities: train on every aspect of the OpenERP software. The project team reviews the software's capabilities in detail and ensures that there are no technical gaps.
Identify manual processes: the project team should identify the manual processes to automate, and the rollout steps should be well documented.
Develop standard operating procedures: this is one of the critical success factors for a smooth ERP implementation. Every aspect of the business needs to be well documented in SOPs, and the documentation must be kept up to date when an SOP changes.

Phase 3: Data Collection and Clean-up
Convert data.
Identify the data which needs to be converted.
Collect new data.
Review all data input.
Clean up the data.

Phase 4: Training and Testing
Pre-test the database.
Verify testing.
Train the trainer.
Perform final testing.

Phase 5: Go Live and Evaluation
Develop the final go-live checklist.
Evaluate the solution.

Saturday, January 18, 2020

Greendale Stadium Case

BONTE Geoffrey, KERTESZ Samuel
Professors: Elisabeth KJELLSTROM, Nikos MACHERIDIS
ASSIGNMENT 1: Essay on a case – Greendale Stadium Case
FEKH13 – Project Management: A Business Perspective
November 19, 2012

Questions (4th edition of the book)

1. Will the project be able to be completed by the May 20 deadline? How long will it take?

Yes, the project will be finished by March 27th, 2009, which is 54 calendar days ahead of schedule. It takes 695 days to complete.

2. What is the critical path for the project?

There are two critical paths that share the same beginning and end; they differ in only two activities:
* Clear Stadium Site => Drive Support Piles => Pour Lower Concrete Bowl => Pour Main Concourse => Install Seats => Construct Steel Canopy => Light Installation => Inspection.
* Clear Stadium Site => Drive Support Piles => Pour Lower Concrete Bowl => Construct Upper Steel Bowl => Install Seats => Construct Steel Canopy => Light Installation => Inspection.

If the total project time has to be reduced, the length of the critical path has to be shortened. The length of the critical path is equal to the sum of the durations of the critical tasks; here, it is 695 days. Any delay to a critical task will delay the entire project. The essential technique for using CPM is to construct a model of the project that includes the following (a small worked sketch of the resulting forward-pass computation is given after Question 3):
* A list of all activities required to complete the project (typically categorised within a work breakdown structure),
* The time (duration) that each activity will take to complete,
* The dependencies between the activities.

3. Based on the schedule, would you recommend that G&E pursue this contract? Why? Include a one-page Gantt chart for the stadium schedule.

Yes, as the estimated completion date is March 27th, 2009. This is 54 calendar days ahead of the deadline, or 38 working days, meaning they have a buffer of 38 working days. Moreover, even though there are two critical paths, as mentioned they differ in only two activities. Finally, if too much delay occurs, weekends or overtime can be used to catch up.
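As flagged in Question 2, the following is a minimal VBA sketch of the forward-pass computation that CPM performs over an activity list. It is illustrative only: the activity durations and predecessor lists are invented for the sketch and are not the Greendale case data, which come from the case exhibit.

Sub CpmForwardPass()
    ' Illustrative activities in topological order; durations (days) and
    ' predecessor index lists are made up for this sketch.
    Dim dur As Variant, preds As Variant
    dur = Array(70, 120, 150, 100, 90)
    preds = Array(Array(), Array(0), Array(1), Array(1), Array(2, 3))

    Dim ef() As Double, i As Long, j As Long, es As Double
    ReDim ef(UBound(dur))
    For i = 0 To UBound(dur)
        ' Earliest start = latest earliest-finish among all predecessors
        es = 0
        For j = 0 To UBound(preds(i))
            If ef(preds(i)(j)) > es Then es = ef(preds(i)(j))
        Next j
        ef(i) = es + dur(i)    ' earliest finish = earliest start + duration
    Next i
    ' With every activity feeding the final one, the project duration is
    ' the earliest finish of the last activity.
    Debug.Print "Project duration: " & ef(UBound(dur)) & " days"
End Sub

The critical path itself can be recovered by a matching backward pass that computes latest starts and identifies the activities with zero slack.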
Defining the Project

Project overview
Project name: Greendale Baseball Stadium.
Location: Greendale, Milwaukee, US (hypothesis).
Type: Design and build.
Owner: G&E Company.
Scope: Build a 47,000-seat baseball stadium.
Time frame: 01/07/2006 – 20/05/2009.
Potential profit: $2,000,000.
Penalty clause: $100,000 per day of delay.

Step 1: Defining the Project Scope

A. Project objective
To construct a 47,000-seat baseball stadium within 2 years, 10 months and 20 calendar days (i.e. in time for the start of the 2009 season). The potential profit is $2,000,000.

B. Deliverables
A 47,000-seat roofed baseball stadium including: playing field, luxury boxes, jumbotron (large-screen television), bathrooms, lockers, restaurants, etc.

C. Milestones
1. Permits approved (if not already) – before July 1st, 2006.
2. Site ready for construction – March 5th, 2007.
3. Foundation poured; field, concourse and upper bowl completed – March 12th, 2008.
4. Infrastructure and equipment installed; construction of the roof on a separate site done – October 20th, 2008.
5. Installation of the roof and lights – February 27th, 2009.
6. Inspection – March 27th, 2009.

D. Technical requirements (hypothesis, based on FIFA technical sheets)
1. Pre-construction decisions:
a. Playing field orientation, to take advantage of the daylight.
b. Environmental compatibility of stadium use.
c. Community relations.
d. Multi-purpose stadiums.
2. Safety:
e. Structural safety.
f. Fire prevention.
g. Safe exits.
h. Television surveillance system.
3. Playing area:
i. Dimensions.
j. Field type and quality (natural or artificial grass).
k. Advertising boards around the playing area.
l. Access to the playing area.
m. Exclusion of spectators from the playing area.
4. Players and match officials:
n. Access to dressing rooms.
o. Dressing rooms, toilets.
p. Access from team areas to the playing field.
q. First aid and treatment room.
5. Spectators:
r. Standards of comfort for the seats.
s. Communication with the public.
t. Access for disabled persons.
u. Merchandise concession stands.
v. Ticketing control.
w. Bathrooms.
6. Hospitality:
x. Luxury boxes.
y. Restaurants.
7. Media:
z. Press box.
aa. Stadium media centre.
ab. Television infrastructure.
8. Lighting and power supply:
ac. Power supply.
ad. Facility requirements.
ae. Lighting design specifications and technology.
af. Environmental impact.
9. Structure:
ag. Retractable roof specifications.

E. Limits and exclusions
1. Few specifications are given (poor content of the appendix).
2. G&E builds but will not manage.
3. Restaurants' and cafeterias' furniture is not included in the contract.
4. The contractor is responsible for subcontracted work.
5. Site work is limited to Monday through Friday, 8:00 am to 6:00 pm. The following holidays are observed: January 1st, Memorial Day, July 4th, Labor Day, Thanksgiving Day, December 25 and 26.

F. Customer review
Unknown, but it could be the city sports commission.

Step 2: Establishing the Project Priorities

"Quality and the ultimate success of a project are traditionally defined as meeting and/or exceeding the expectations of the customer and/or upper management in terms of cost (budget), time (schedule), and performance (scope) of the project." A good trade-off has to be made among time, cost and performance. The objective of the project is a baseball stadium that we assume has to last for at least 50 years. Thus the project's priority is above all performance, but also time, as the stadium has to be finished before the start of the 2009 season. Cost has to be taken into account but does not represent the main focus of this project. Because of that, the project represents a risk for G&E, as cost flexibility is really limited.

Figure 1: Project Priority Matrix (performance: constrain; time: enhance; cost: accept)

Time: the project schedule has to be respected, otherwise a penalty clause of $100,000 per day will be applied, which represents 5% of the estimated profits of the project; but it can be reduced. This is why time is an "enhance" priority.
Performance: the performance requirements of the project are fixed; they cannot be compromised and have to be respected.
Cost: going over budget is acceptable though not desirable, especially considering the small estimated profit in comparison with the size of the project.

Step 3: Creating the Work Breakdown Structure

The WBS is a map of the project; "it is an outline of the project with different levels of detail." We divided it into 3 main points:
1. Initial planning and discussions with the management team. This category groups the upper management decisions: it analyses the whole project and selects a project manager as well as a team.
2. Project management activities. This category groups the middle management issues such as cost management, human resources management, risk management, etc.
3. Building the stadium. This category specifies, in order, the tasks needed for the construction of the stadium. They are the same as the ones used in the Gantt chart.

REFERENCES

Baker, S. (2004). Critical Path Method (CPM), University of South Carolina, Health Services Policy and Management Courses.
FIFA. (2007). Football Stadiums: Technical recommendations and requirements (4th edition) [pdf]. From http://www.fifa.com/mm/document/tournament/competition/football_stadiums_technical_recommendations_and_requirements_en_8211.pdf
Gray, C., Larson, R. (2008). Project Management: The Managerial Process (Fourth Edition). Singapore: McGraw-Hill, International Edition.

[1]. Baker, S. (2004). Critical Path Method (CPM), University of South Carolina, Health Services Policy and Management Courses.
[2]. FIFA. (2007). Football Stadiums: Technical recommendations and requirements (4th edition).
[3]. Gray, C., Larson, R. (2008). Project Management: The Managerial Process, p. 95.
[4]. Gray, C., Larson, R. (2008). Project Management: The Managerial Process, p. 97.

Friday, January 10, 2020

The aim of this experiment was to investigate whether or not certain foods contained the different food groups

Abstract – The aim of this experiment was to investigate whether or not certain foods contained the different food groups. If a sample turned black it contained starch; if it turned red it contained glucose; if it turned purple it contained protein; and if the paper turned clear/translucent the sample contained fat. The equipment used was vegetable oil, a spotting palette, a pipette, starch, glucose, albumin, iodine solution, Benedict's solution and copper sulphate solution. The method was simple: mix each type of food with the reagent for a food group and see whether the colour changed, indicating that the food contained that food group. For example, if iodine was added to a sample and its colour changed, the sample contained starch.

Introduction – All the processes of life require energy, and this energy comes from food groups. Carbohydrates provide energy for movement; they are made up of carbon, hydrogen and oxygen and are found in cereals and pasta. Proteins are used to assist growth and repair in the body; they are made up of amino acids and are found in meat and fish. Fats are used to provide a concentrated source of energy and to insulate the body in cold temperatures. Saturated fats are obtained from animal products such as meat, whereas polyunsaturated fats come from vegetables. Vitamins are necessary in small amounts for growth, and different vitamins have different functions: vitamin A is required for good vision and comes from vegetables such as carrots; vitamin B releases energy from food and is obtained from milk and bread; vitamin C gives healthy skin and comes from oranges and other fruits; vitamin D helps the body absorb calcium and comes from margarine and oily fish. The digestive system has two main functions: one is to convert food into the nutrients the body requires, and the other is to remove any waste from the body.

Method –
Starch test: collect the (liquid) food sample and add a few drops of iodine solution (yellow/brown). If the colour changes from yellow/brown to blue-black, the sample contains starch.
Fat test: collect the food sample and put it on a piece of paper; if it soaks through, the sample contains fat.
Protein test: collect the liquid food sample in a test tube and add 5 drops of copper sulphate and 5 drops of sodium hydroxide. If the colour changes from blue to purple, the sample contains protein.
Sugar test: collect the food sample in a test tube and add a few drops of Benedict's solution, then place the test tube in a boiling water bath. If the colour changes from blue to green and then to orange and red, the sample contains sugar; the colour change depends on the concentration of sugar in the sample.

(P6) The body requires all the nutrients to remain healthy, but different people need different varieties and portion sizes. For example, athletes need a diet high in carbohydrates and meat, because they need it to release energy and strengthen their bones more than an office worker would; an office worker, however, needs plenty of vitamin A, because spending the day working at a computer can damage the eyes. If a person eats too much of the food groups, they become over-nourished, and if that is not worked off the body becomes fatter and overweight; too much fatty food and oil causes this.
However, if a person eats too little of the food groups, they become under-nourished, which means the body becomes skinny and underweight; this is caused by a poor diet. If a person consumes too little vitamin A, the eyes have problems seeing at night; if too little vitamin C, the mouth becomes affected and develops scurvy. The body has different enzymes which digest the different food groups: protease is the enzyme responsible for the breakdown of protein, lipase for the breakdown of fat, and amylase for the breakdown of starch. Other enzymes include maltase, which digests maltose to glucose; lactase, which digests lactose to glucose and galactose; and sucrase, which digests sucrose to glucose and fructose.

Thursday, January 2, 2020

A Comparative Analysis Of The Results Finance Essay

Abstract

This study is based on the original M3-Competition (the M3-Competition was a competition designed to examine the forecasting capabilities of several forecasting organisations). The project, which uses the M3 data, replicates the results obtained by the original researchers and confirms the calculations of their study in terms of a SMAPE (Symmetric Mean Absolute Percentage Error) analysis. The data was also analysed using an alternative error analysis methodology (ROC – Rate of Change) and conclusions drawn from a comparative analysis of the results. In conclusion, this study has shown that the findings drawn in the original M3 study differed from those obtained using the ROC methodology, although there was some general agreement regarding the complexity, or otherwise, of the forecasting methodologies employed. For example, the ROC methodology showed that one of the top-performing methods was Theta, in agreement with the SMAPE analysis, which ranked it as the best overall performing method. Given that the Theta method is considered a simple forecasting approach, this tends to confirm the conclusions drawn from the original study. As previously mentioned, this study also showed that there were differences in the overall rankings of the 24 methods used in the original study under the two different methods of comparison, and that there are differences between the published results of the original study and those replicated in this study.

Declaration

I hereby declare: that except where reference has clearly been made to work by others, all the work presented in this report is my own work; that it has not previously been submitted for assessment; and that I have not knowingly allowed any of it to be copied by another student. I understand that deceiving or attempting to deceive examiners by passing off the work of another as my own is plagiarism. I also understand that plagiarising the work of another, or knowingly allowing another student to plagiarise from my work, is against the University regulations and that doing so will result in loss of marks and possible disciplinary proceedings against me.

Signed: …
Date: …

Table of figures

Table 1: Number of negative data points in each forecasting method
Table 2: SMAPE across the 18 forecasting horizons
Table 3: SMAPE between forecasting horizon boundaries
Table 4: Comparative ranking of SMAPE between the published results of the M3-Competition and those calculated in this study
Table 5: ROC error for the Single method across the 18 forecasting horizons
Table 6: Ranking of ROC results
Table 7: ROC results per observation
Table 8: Comparative ranking between SMAPE and ROC
Table A1: Comparative ranking between ROC and SMAPE
Table A2: ROC results
Table A3: ROC of Single across 18 forecasting horizons
Table A4: ROC of Winter across 18 forecasting horizons

Graph 1: Comparative ranking of SMAPE between the published results of the M3-Competition and those calculated in this study
Graph 2: Matching difference between the published results of the M3-Competition and those calculated in this study
Graph 3: Z/O–Z/A chart, after John (2004)
Graph 4: Comparative ranking between SMAPE and ROC
Graph A1: Matching difference of rank between ROC and SMAPE
Graph A2: Single ROC on the 11th forecasting horizon
Graph A3: Winter ROC on the 9th forecasting horizon

Contents

1.0 Introduction
2.0 Study of Forecasting Competitions
2.1 Previous Studies
2.2 M-Competition
2.3 M2-Competition
2.4 M3-Competition
3.0 Source Data
3.1 Data format
3.2 Actual data
3.3 Forecasted data
3.4 Data source error
4.0 SMAPE Concept and Calculation
4.1 Definition
4.2 Calculation
4.3 Results
4.4 Matching M3-Competition data
5.0 ROC (Rate of Change) Concept and Calculation
5.1 Definition
5.2 Results
5.3 Ranking ROC Results
6.0 Comparative analysis between SMAPE and ROC
7.0 Discussion and Conclusion
References
Appendix: Comparative differences between SMAPE and ROC results; ROC results; Single results; Winter results; Summary of the results from the 24 forecasting methods

File list
The list of all the included files is given in File list.txt, where more information on the files and their formats is provided.

1.0 Introduction

Prediction has become very important in many organisations, since decision-making processes rely heavily on predictions of future events. Because of the importance of these forecasts, many forecasting methods have been applied and used, and error measures have been applied to forecasting methods to determine their performance. In this study, the M3-Competition is re-analysed and also investigated with the ROC (Rate of Change) methodology. The M3-Competition was published in 2000 by researchers at INSEAD, Paris.

Aims
The project explores and investigates the conclusions and subsequent commentaries from the original M3-Competition and then undertakes an analysis, based on the Rate of Change methodology, of the original data sets, drawing comparisons of the results.

Objectives
The study involves the following tasks:
Study the work of all the M-Competitions and related previous work.
Replicate the SMAPE (Symmetric Mean Absolute Percentage Error) results introduced in the M3-Competition.
Undertake the ROC (Rate of Change) methodology on the M3 data and produce the consequent errors.
Compare the measurement errors between SMAPE and ROC.
Draw conclusions on the goodness of the error measurements.

2.0 Study of Forecasting Competitions

2.1 Previous Studies
Early studies on forecasting accuracy, in the context of this report, started in 1969. At that time, the studies were based on only a limited number of methods. In 1979, Makridakis and Hibon expanded the range and scope of such studies: their study compared 111 time series drawn from real-life situations covering business, industry and macro data. Theil's U-coefficient and MAPE (Mean Absolute Percentage Error) were used as the measures of accuracy.
The major conclusion from these studies was that simple methods, such as smoothing methods, outperformed the more sophisticated ones, as reported in the M3-Competition by Makridakis et al. (2000, p. 452). However, these conclusions conflicted with the accepted views of the time.

2.2 M-Competition
Despite the critics, Makridakis continued his argument by introducing the M-Competition (1982). This time the number of series was increased to 1001, and the number of methods increased to 15; different variants of the same method were also tested. Minor changes were made to the general structure of the competition, such as the types of series, which were now macro, micro, industry and demographic. The forecasting horizons were 18 observations for monthly series, 8 for quarterly and 6 for yearly. Additional error measures were also added: Mean Square Error, Average Rankings and the Median of Absolute Percentage Errors. From the results, the four conclusions drawn by Makridakis et al. (1982; 2000, p. 452) were:
1. It was not true that statistically sophisticated or more complex methods outperformed simpler methods.
2. The relative ranking of the various methods varied according to the accuracy measure used.
3. The forecast accuracy of various individual methods combined outperformed the individual methods which were the constituent parts of the combined method, and the combined methods on the whole did very well in comparison with other methods.
4. The accuracy of each of the various methods depended upon the length of the forecasting horizon involved.
At the conclusion of the study, the results were made available to other researchers for the purposes of verification and replication. This showed that:
1. The calculations contained in the study were verified and found to be correct.
2. The results were also confirmed when other researchers, using the same data sets, employed different methods of measuring the developed results.
3. Other researchers, using different data series, also reinforced in their results the validity of such studies.
Throughout this period it was still too soon to state that statistically sophisticated methods did not do better than simple methods when there was considerable randomness in the data. It was also shown that simple and sophisticated methods could be equally effective when applied to series which exhibited seasonal patterns.

2.3 M2-Competition
In 1993, a further attempt was made to measure and develop the accuracy of various forecasting methods in the M2-Competition (1993). This was constructed on a real-time basis with a further five forecasting organisations (the data was provided by four companies and included six economic series). In this more recent study, other forecasting methods, such as Naïve 2, single smoothing and Dampen, were included. The accuracy measure employed was based on MAPE (Mean Absolute Percentage Error). The four companies provided the experts with actual data on past and present situations (information on the nature of the business and prevailing business conditions was also provided). The participating experts then had to provide forecasts for the next 15 months. After a year, the forecast data was checked against the actual data from the companies. The conclusions from this study were nevertheless identical to those drawn from the M-Competition, in that the more sophisticated methods did not produce more accurate forecasts than the simpler ones.
The study also confirmed the conclusions drawn from the previous studies.

2.4 M3-Competition
The M3-Competition (2000) involved more methods, more researchers and more time series. The number of time series was extended to 3003. To reduce the demands on data storage it was decided that a minimal number of observations would be required for each type of data:
14 observations for yearly series
16 observations for quarterly series
48 observations for monthly series
60 observations for other series
Given the source data, the participating experts were asked to develop forecasts as follows:
6 for yearly
8 for quarterly
18 for monthly
8 for other
The given time series did not include any series containing negative values, so it was expected that the submitted forecast data would also contain no negative values. Nevertheless, it was decided that any negative values received in the forecast data would be set to zero. Seven further methods were added to the submitted data, received from participants who used neural networks, expert systems and decomposition to produce their forecasts. Five accuracy measures were used to analyse the data:
Symmetric MAPE
Average Ranking
Median symmetric APE
Percentage Better
Median RAE
The conclusions of the M3-Competition analysis were identical to those of the previous M-Competitions. However, it was recognised that Theta, a new method used in the M3-Competition, had outperformed all other methods and performed consistently well across both forecasting horizons and accuracy measures, as suggested by Makridakis et al. (2000, p. 459).

3.0 Source Data

3.1 Data format
The original data came from the M3-Competition and was provided by INSEAD. The data was broken down into two parts: actual data and forecast data. The actual data was taken from the website of the International Institute of Forecasters (M3-Competition data) and was given as an .xls file, i.e. in Excel spreadsheet format, broken down into five parts titled Competition, M3 Year, M3 Quart, M3 Month and M3 Other. The forecast data was provided by Michele Hibon, one of the authors of "The M3-Competition: results, conclusions and implications". This data was in .dat format, which meant it had to be converted into .xls format to be compatible with the actual data.

3.2 Actual data
As mentioned previously, the actual data was provided as an .xls file, which meant the data could be used in calculations straight away. However, the forecast data provided only the last 6 values for yearly series, 8 for quarterly, 18 for monthly and 8 for other, so only the trailing values of each type of actual data were used. In order to copy these unsynchronised trailing values efficiently, Excel's macro facility was utilised: the macro code below was used to move all the data to the right-hand side of the spreadsheet, so that the trailing values in each category could be copied easily.
Macro code for rearranging the actual data of the M3-Competition:

Yearly data
Sub RearrangeYearly()
    ' Shift all non-empty cells in each row to the right-hand side
    aa = Range("A1:BA646")
    For i = 1 To 646
        t = 53
        For h = 53 To 1 Step -1
            rr = aa(i, h)
            aa(i, h) = Empty
            If rr <> Empty Then
                aa(i, t) = rr
                t = t - 1
            End If
        Next h
    Next i
    Range("A1:BA646") = aa
End Sub

Quarterly data
Sub RearrangeQuarterly()
    aa = Range("A1:BZ757")
    For i = 1 To 757
        t = 78
        For h = 78 To 1 Step -1
            rr = aa(i, h)
            aa(i, h) = Empty
            If rr <> Empty Then
                aa(i, t) = rr
                t = t - 1
            End If
        Next h
    Next i
    Range("A1:BZ757") = aa
End Sub

Monthly data
Sub RearrangeMonthly()
    aa = Range("A1:ET1429")
    For i = 1 To 1429
        t = 150
        For h = 150 To 1 Step -1
            rr = aa(i, h)
            aa(i, h) = Empty
            If rr <> Empty Then
                aa(i, t) = rr
                t = t - 1
            End If
        Next h
    Next i
    Range("A1:ET1429") = aa
End Sub

Other data
Sub RearrangeOther()
    aa = Range("A1:DF175")
    For i = 1 To 175
        t = 110
        For h = 110 To 1 Step -1
            rr = aa(i, h)
            aa(i, h) = Empty
            If rr <> Empty Then
                aa(i, t) = rr
                t = t - 1
            End If
        Next h
    Next i
    Range("A1:DF175") = aa
End Sub

3.3 Forecasted data
The forecast data consisted of 24 forecasting methods, all provided as .dat files. The data was converted into .xls format by opening the files in Excel, with delimiters added to separate each value into its own cell. However, the imported data still contained values that overlaid each other and did not match the format of the actual data, so macros were used to rearrange the data into a working format. The data was first rearranged to remove the overlay on each observation, demonstrated on one example and then repeated by setting the macro's loop conditions. The cells which used to hold the overlaid values were then still present, so another macro was used to remove all the empty cells. For the AAM1 and AAM2 data the macro conditions had to be changed, as only 2184 observations were provided. Finally, so that the data would be compatible with the actual data, the heading of each observation was removed by a further macro.
Macro code: rearrange the overlay values

Sub RearrangeOverlay()
    ' Move the two continuation rows of each record alongside its first row
    For i = 2804 To 8514
        Range("A" & i + 1 & ":H" & i + 1).Select
        Selection.Cut
        Range("I" & i).Select
        ActiveSheet.Paste
        Range("A" & i + 2 & ":H" & i + 2).Select
        Selection.Cut
        Range("Q" & i).Select
        ActiveSheet.Paste
        i = i + 3
    Next i
End Sub

Remove all the empty cells

Sub RemoveEmptyCells()
    j = 5659
    For i = 2805 To j
        Rows(i & ":" & i + 1).Select
        Selection.Delete Shift:=xlUp
        i = i + 1
        j = j - 4
    Next i
End Sub

Remove all the headings

Sub RemoveHeadings()
    j = 3003
    For i = 1 To j
        Rows(i & ":" & i).Select
        Selection.Delete Shift:=xlUp
    Next i
End Sub

Macro code (AAM1 and AAM2): rearrange the overlay values

Sub RearrangeOverlayAAM()
    For i = 1514 To 7224
        Range("A" & i + 1 & ":H" & i + 1).Select
        Selection.Cut
        Range("I" & i).Select
        ActiveSheet.Paste
        Range("A" & i + 2 & ":H" & i + 2).Select
        Selection.Cut
        Range("Q" & i).Select
        ActiveSheet.Paste
        i = i + 3
    Next i
End Sub

Delete the empty cells

Sub RemoveEmptyCellsAAM()
    j = 7224
    For i = 1515 To j
        Rows(i & ":" & i + 1).Select
        Selection.Delete Shift:=xlUp
        i = i + 1
        j = j - 4
    Next i
End Sub

3.4 Data source error
In dealing with such a large data set, errors could have occurred during the data transfer between the original and tested data. In preparing the forecast data, five methods were found to contain negative forecast values:
Robust-Trend
Automat ANN
Theta
ARARMA
SmartFcs
Some methods had more negative values than others. Robust-Trend had the most, with 151 negative values, followed by Automat ANN; SmartFcs had the fewest, with one negative value. The negative values in these five methods were therefore replaced by their positive equivalents, since the results obtained this way were nearer to the results published in the M3-Competition (2000).

Method | Number of negative values
Robust-Trend | 151
Automat ANN | 47
Theta | 19
ARARMA | 4
SmartFcs | 1
Table 1: Number of negative data points in each forecasting method

4.0 SMAPE (Symmetric Mean Absolute Percentage Error) Concept and Calculation

4.1 Definition
The Symmetric Mean Absolute Percentage Error (SMAPE), or Adjusted Mean Absolute Percentage Error (Armstrong, 1985), can be defined as:

SMAPE = (1/n) * Σ |X − F| / ((X + F)/2)    (1.1)

Note: X is the actual value and F the forecast value; the symmetric absolute percentage errors are summed over all observations and divided by the number of observations n.

Despite its similarity to MAPE (Mean Absolute Percentage Error), SMAPE has an advantage over MAPE in that it eliminates the favouring of low estimates, and there is no limit on the high side, as mentioned in Armstrong's book Long-Range Forecasting (1985). SMAPE is bounded between 0%, for a perfect forecast, and 200% for an infinitely bad forecast. This means that SMAPE is less sensitive than MAPE to measurement errors in the actual data, as stated by Armstrong (1985, p. 348).
However, SMAPE is not totally symmetric, as over-forecasts and under-forecasts are not treated equally.

4.2 Calculation
For each forecasting method, SMAPE was computed separately for each forecasting horizon. First, the symmetric absolute percentage error (APE) was calculated for each observation, over the 3003 observations (2184 for AAM1 and AAM2), as follows (a minimal VBA sketch of this computation is given below):
1. Calculate the error, defined as Error = Actual − Forecast.
2. Take the absolute error, |Error|.
3. Calculate the sum of the actual and forecast values.
4. Divide the sum of the actual and forecast values by 2.
5. Calculate the APE by dividing the value from step 2 by the value from step 4.
To produce the SMAPE, the APE values for each observation were summed and divided by the number of observations considered. The SMAPE was then also calculated over bounded ranges of forecasting horizons: 1 to 4, 1 to 6, 1 to 8, 1 to 12, 1 to 15 and 1 to 18.
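The per-observation computation above can be expressed as a small worksheet function. This is a sketch only: the function name and the assumption that the actuals and forecasts are passed as two equal-length ranges are illustrative, not part of the original study's spreadsheet.

Function SymmetricMAPE(actuals As Range, forecasts As Range) As Double
    ' Returns the mean symmetric absolute percentage error over a
    ' pair of equal-length ranges (equation 1.1); assumes A + F <> 0.
    Dim i As Long, n As Long
    Dim a As Double, f As Double, total As Double
    n = actuals.Cells.Count
    For i = 1 To n
        a = actuals.Cells(i).Value
        f = forecasts.Cells(i).Value
        ' Symmetric APE: |A - F| divided by the mean of A and F
        total = total + Abs(a - f) / ((a + f) / 2)
    Next i
    SymmetricMAPE = total / n    ' multiply by 100 for a percentage
End Function

Applied to the actual and forecast columns of one forecasting horizon, a call such as =SymmetricMAPE(B2:B3004, C2:C3004) would give that horizon's SMAPE figure of the kind reported in Table 2.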
4.3 Results
From the SMAPE values, the methods were ranked from best (least error) to worst (highest error), according to the result for the bounded forecasting horizon range 1 to 18, since this error combines the errors over all 18 forecasting horizons. Example results for the Theta method are shown in Tables 2 and 3.

Forecasting horizon | SMAPE | N
1 | 0.084017 | 3003
2 | 0.095669 | 3003
3 | 0.113103 | 3003
4 | 0.125112 | 3003
5 | 0.131298 | 3003
6 | 0.139994 | 3003
7 | 0.122699 | 2358
8 | 0.119834 | 2358
9 | 0.131595 | 1428
10 | 0.133898 | 1428
11 | 0.1347 | 1428
12 | 0.132214 | 1428
13 | 0.154032 | 1428
14 | 0.151862 | 1428
15 | 0.162854 | 1428
16 | 0.177043 | 1428
17 | 0.168029 | 1428
18 | 0.182731 | 1428
Table 2: SMAPE across the 18 forecasting horizons

Forecasting horizon | 1 to 4 | 1 to 6 | 1 to 8 | 1 to 12 | 1 to 15 | 1 to 18
Total percentage error | 1254.958 | 2069.647 | 2641.54 | 3401.817 | 4071.19 | 4824.892
N | 12012 | 18018 | 22734 | 28446 | 32730 | 37014
SMAPE | 0.104475 | 0.114866 | 0.116193 | 0.119589 | 0.124387 | 0.130353
Table 3: SMAPE between forecasting horizon boundaries

4.4 Matching the M3-Competition data
In order to replicate the results of the M3-Competition, the SMAPE measure described above was applied to the M3 data. The SMAPE results obtained were then compared with the results published in the M3-Competition, as shown in Table 4.

Rank | SMAPE | SMAPE (M3)
1 | Theta | Theta
2 | Forecast X | Forecast Pro
3 | Forecast Pro | Forecast X
4 | Comb S-H-D | Comb S-H-D
5 | Dampen | Dampen
6 | RBF | RBF
7 | B-J automatic | Theta-sm
8 | Automat ANN | B-J automatic
9 | SmartFcs | PP-autocast
10 | PP-autocast | Automat ANN
11 | Flores-Pearce2 | SmartFcs
12 | Single | Flores-Pearce2
13 | Theta-sm | Single
14 | Autobox2 | Autobox2
15 | AAM1 | Holt
16 | Flores-Pearce1 | AAM2
17 | ARARMA | Winter
18 | AAM2 | Flores-Pearce1
19 | Holt | ARARMA
20 | Winter | AAM1
21 | Autobox1 | Autobox1
22 | Naïve2 | Autobox3
23 | Autobox3 | Naïve2
24 | Robust-Trend | Robust-Trend
Table 4: Comparative ranking of SMAPE between the published results of the M3-Competition and those calculated in this study

From the comparison, the rankings of the forecasting methods were not identical, as had been expected. Some methods performed better than expected: for example, AAM1 moved up by 5 ranks, and Naïve2 outperformed Autobox3. Meanwhile, some methods did not perform as well: for example, Theta-sm dropped by 6 ranks.

Graph 1: Comparative ranking of SMAPE between the published results of the M3-Competition and those calculated in this study
Graph 2: Matching difference between the published results of the M3-Competition and those calculated in this study

As mentioned earlier, some errors were found in the raw data, namely the negative values in five of the forecasting methods. Once all the negative values in these methods were replaced with positive values, two of the methods produced rankings corresponding to the original M3 SMAPE analysis: Robust-Trend and Theta. The rest (Automat ANN, SmartFcs and ARARMA) mismatched the original SMAPE by 2 ranks.

Despite the argument above, it is clear that other forecasting methods with perfectly good data also produced mismatched results. Given the size of the data set, it is possible that errors occurred at various stages of the calculation, even though this was treated with caution. For example, rounding errors could occur when the 3003 observations were used to calculate the total SMAPE for each forecasting horizon: the more observations considered, the more noticeable the accumulated rounding error becomes. It should also be mentioned that the forecast data was not obtained directly from the M3-Competition, as it was not published in the M3-Competition paper (2000); it is therefore fair to say that the forecast data used here may not be identical to that used in the M3-Competition. However, the data was provided by Michele Hibon and thus came from a reliable source, and despite the differences in detail, both sets of results support the same conclusions. This is also borne out by the small deviation in the plot of both results, the root mean square value being 0.9012.

5.0 ROC (Rate of Change) Concept and Calculation

5.1 Definition
The Rate of Change method is based on the centred forecast–observation diagram for change developed by Theil (1958), subsequently reported by Gilchrist (1976) and extended by John (2004, p. 1000). In Theil (1958), the diagram of actual and predicted changes gave a graphical picture of turning-point error (Theil, 1958, p. 29): the horizontal axis represents the actual change, the vertical axis the predicted change, with a line of perfect forecasts at 45° through the origin. The diagram is divided into four quadrants; the second and fourth quadrants represent turning-point errors, determined by the sign of the preceding actual change in the same variable. The other two quadrants are divided by the line of perfect forecasts into equal areas of overestimation and underestimation of changes.

The centred forecast–observation diagram of Gilchrist (1976, p. 223) was used to explain more about the characteristics of forecasting; it is split into six sectors, as also described by John (2004, p. 1001). John (2004) uses the diagram as a chart with the forecast series on the y-axis and the actual series on the x-axis. For the actual series, each pair of successive values defines a change:

Z/Ai = A(i+1) − A(i)    (2.1)

and for the forecast series:

Z/Oi = Ô(i+1) − Ô(i)    (2.2)

Note: A denotes the actual values and Ô the forecast values.

Each individual pair (Z/Ai, Z/Oi) can then be determined and plotted.
As mentioned previously, the chart is divided into six sectors, numbered clockwise starting from the positive forecast-change axis:
Sector 1 – Overestimate of positive change
Sector 2 – Underestimate of positive change
Sector 3 – Forecast decrease when an increase in the actual occurs
Sector 4 – Overestimate of negative change
Sector 5 – Underestimate of negative change
Sector 6 – Forecast increase when a decrease in the actual occurs

Graph 3: Z/O–Z/A chart, after John (2004)

The number of errors in each sector can then be determined. However, the magnitudes of the errors in the sectors are not equal, and the error is divided into two distinct types, normal error and quadrant error, as recognised by John (2004). A normal error arises for a pair whose changes have the same direction (sign of change), and is measured as:

Normal error = (|Z/Ai| − |Z/Oi|)²    (2.3)

When the directions (signs of change) of Z/Ai and Z/Oi differ, for example Z/Ai positive and Z/Oi negative, the error is known as a quadrant error and is measured as:

Quadrant error = (3|Z/Ai| + |Z/Oi|)²    (2.4)

This weighting was devised by John (2004) on the argument that even if a forecast fails to capture the correct magnitude of a change, it should at least capture its direction.

5.2 Results
For each forecasting horizon, all the Z/Ai and Z/Oi values were calculated and allocated to their position on the chart, and the magnitude of error in each sector was calculated. The total normal error, total quadrant error and total error were then calculated for each forecasting horizon, as shown for the Single method in Table 5. Adding the total errors across the forecasting horizons gives the total magnitude of error for each forecasting method.

Forecasting horizon | Normal error | Quadrant error | Total error
1 | 1,669,235,393 | 4,762,819,858 | 6,432,055,251
2 | 3,090,915,225 | 7,819,879,985 | 10,910,795,211
3 | 3,809,937,394 | 10,672,690,256 | 14,482,627,650
4 | 4,684,287,634 | 18,397,987,871 | 23,082,275,504
5 | 4,736,994,108 | 16,554,461,103 | 21,291,455,210
6 | 4,360,306,528 | 20,049,475,146 | 24,409,781,674
7 | 2,373,819,526 | 10,918,011,198 | 13,291,830,724
8 | 2,532,396,960 | 11,776,071,077 | 14,308,468,037
9 | 1,792,817,169 | 4,172,721,813 | 5,965,538,981
10 | 1,801,230,076 | 4,202,413,276 | 6,003,643,352
11 | 1,378,997,041 | 3,858,393,264 | 5,237,390,306
12 | 2,121,625,068 | 4,529,366,741 | 6,650,991,810
13 | 1,386,801,507 | 5,628,793,890 | 7,015,595,396
14 | 1,798,343,821 | 7,168,856,951 | 8,967,200,772
15 | 2,150,461,735 | 8,319,292,944 | 10,469,754,679
16 | 2,980,716,589 | 4,849,696,617 | 7,830,413,206
17 | 2,754,027,689 | 6,003,654,290 | 8,757,681,979
18 | 2,359,952,669 | 6,596,002,482 | 8,955,955,151
Table 5: ROC error for the Single method across the 18 forecasting horizons
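To make the error definitions concrete, the following VBA sketch classifies one pair of changes and returns its contribution to the error total. It is illustrative only: the function name is invented here, and ties at zero change are folded into the normal-error case by assumption.

Function ROCError(zA As Double, zO As Double) As Double
    ' zA = A(i+1) - A(i), the actual change (equation 2.1)
    ' zO = O(i+1) - O(i), the forecast change (equation 2.2)
    If zA * zO >= 0 Then
        ' Same direction of change: normal error, equation (2.3)
        ROCError = (Abs(zA) - Abs(zO)) ^ 2
    Else
        ' Direction of change missed: quadrant error, equation (2.4),
        ' weighted heavily against getting the turning point wrong
        ROCError = (3 * Abs(zA) + Abs(zO)) ^ 2
    End If
End Function

Summing this quantity over all consecutive pairs in one forecasting horizon gives per-horizon totals of the kind shown in Table 5, with the two branches accumulated separately for the normal and quadrant columns.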
5.3 Ranking the ROC results
In order to rank the forecasting methods, any bias caused by the differing number of observations in each method was eliminated by using the error per observation. The results for the 24 forecasting methods were as follows:

Method | Error per observation | Rank
Single | 67,953,198 | 1
Theta | 72,562,195 | 2
Comb S-H-D | 72,944,577 | 3
Forecast X | 76,111,849 | 4
Flores-Pearce2 | 76,589,909 | 5
SmartFcs | 79,120,086 | 6
Theta-sm | 81,199,075 | 7
Dampen | 83,862,347 | 8
Forecast Pro | 83,931,783 | 9
AAM1 | 86,621,738 | 10
Naïve2 | 87,290,813 | 11
AAM2 | 88,513,888 | 12
B-J automatic | 89,875,446 | 13
RBF | 89,950,117 | 14
Automat ANN | 92,188,194 | 15
PP-autocast | 94,661,129 | 16
Holt | 94,735,528 | 17
Autobox3 | 102,203,567 | 18
Flores-Pearce1 | 115,985,051 | 19
Autobox1 | 120,046,101 | 20
Robust-Trend | 122,657,044 | 21
Autobox2 | 134,148,029 | 22
ARARMA | 237,620,699 | 23
Winter | 6,602,739,469 | 24
Table 6: Ranking of ROC results

The same normalisation was applied to the other calculated errors, the normal errors and quadrant errors. In addition, the numbers of observed data points that were over-estimates, under-estimates, quadrant errors and correct forecasts were normalised and listed. The performance of the forecasting methods on each criterion, ranked from best (top) to worst (bottom), is shown in Table 7.

Correct | Over-estimates | Under-estimates | Quadrants | Normal errors | Quadrant errors | Total errors
Naïve2 | Single | Robust-Trend | Theta | Single | Comb S-H-D | Single
AAM1 | Automat ANN | Autobox3 | Dampen | Automat ANN | Theta | Theta
Single | Naïve2 | Autobox1 | Comb S-H-D | Naïve2 | Single | Comb S-H-D
Forecast X | B-J automatic | Holt | Forecast Pro | Forecast X | Flores-Pearce2 | Forecast X
Flores-Pearce1 | Theta-sm | ARARMA | Single | Theta-sm | Forecast Pro | Flores-Pearce2
SmartFcs | Theta | Winter | Forecast X | AAM1 | Dampen | SmartFcs
AAM2 | Autobox2 | RBF | PP-autocast | AAM2 | Forecast X | Theta-sm
B-J automatic | Forecast X | SmartFcs | B-J automatic | SmartFcs | SmartFcs | Dampen
Flores-Pearce2 | Dampen | Flores-Pearce2 | Flores-Pearce1 | Flores-Pearce2 | B-J automatic | Forecast Pro
Forecast Pro | Flores-Pearce2 | Flores-Pearce1 | RBF | Theta | Theta-sm | AAM1
ARARMA | PP-autocast | Autobox2 | Winter | Comb S-H-D | Holt | Naïve2
Theta | Flores-Pearce1 | Forecast Pro | ARARMA | RBF | RBF | AAM2
Autobox2 | Comb S-H-D | PP-autocast | Naïve2 | Dampen | Autobox2 | B-J automatic
Comb S-H-D | Forecast Pro | Comb S-H-D | Holt | Forecast Pro | PP-autocast | RBF
Autobox1 | SmartFcs | Forecast X | Theta-sm | PP-autocast | AAM1 | Automat ANN
Winter | RBF | Theta-sm | SmartFcs | Autobox3 | AAM2 | PP-autocast
Autobox3 | Autobox1 | Naïve2 | Flores-Pearce2 | B-J automatic | Naïve2 | Holt
Automat ANN | Winter | Automat ANN | Autobox2 | Holt | Autobox1 | Autobox3
Theta-sm | ARARMA | Dampen | Autobox3 | Flores-Pearce1 | Autobox3 | Flores-Pearce1
Dampen | Holt | B-J automatic | Automat ANN | Robust-Trend | Automat ANN | Autobox1
PP-autocast | Autobox3 | Theta | Robust-Trend | Autobox1 | ARARMA | Robust-Trend
RBF | Robust-Trend | Single | Autobox1 | Autobox2 | Robust-Trend | Autobox2
Holt | AAM1 | AAM2 | AAM1 | ARARMA | Flores-Pearce1 | ARARMA
Robust-Trend | AAM2 | AAM1 | AAM2 | Winter | Winter | Winter
Table 7: ROC results per observation (each column ranked best to worst)

6.0 Comparative analysis between SMAPE and ROC

From the analysis, it can be argued that the conclusions of the M-Competition are still valid, since it was again shown that sophisticated or complex methods do not outperform the simpler ones. This is clear from the fact that Single outperformed all of the selected methods: Single is based on single exponential smoothing, considered a simple method, while explicit trend methods such as Robust-Trend and Winter came last in the competition. It was also confirmed that different accuracy measures produce different relative rankings of the various methods; taking a comparative view of the two error measures, some forecasting methods perform noticeably better under ROC.
An example of a significant change is Single and Naïve2: both improved by 11 ranks compared with SMAPE. In addition, combined methods still outperformed individual methods: Winter, an explicit trend model, did worst of all the methods, and the worst combined method was Flores-Pearce1, which ranked 19 in ROC and 16 in SMAPE.

For all the agreement mentioned above, ROC and SMAPE are still different methods of error measurement and do produce different results. In ROC, errors can be divided into normal and quadrant errors, which gives researchers more information on how each forecasting method performed and indicates where and how improvements could be justified. From this extended information, the performance on each forecasting horizon of each forecasting method can be compared against normal and quadrant errors. As mentioned in the definition of ROC, normal error can itself be divided into two types, over-estimates and under-estimates; this information is critical to the improvement of forecasting methods. In this study, the number of error points of each type was calculated; examples can be seen for Single (Table A4) and Winter (Table A5) in the ROC results in the appendix. These tables also show that the number of correct data points was obtained from the ROC. Despite the advantage of this information, quadrant errors were still dominant in the magnitude of the calculated error of each forecasting method. Naïve2, which had the most correct data points with 90, still came 11th in the overall ranking, and Winter, despite its correct data points, was still overall the worst performer. If the over-estimates and under-estimates are further separated into positive change (Sector 1, S1, and Sector 2, S2) and negative change (Sector 4, S4, and Sector 5, S5), then further comments can be made on each individual observation in each forecasting method.

Thus it can be argued that, although SMAPE and ROC lead to the same conclusion on the overall performance of the types of forecasting methods, one major difference can be identified: ROC can be used to better understand the errors produced by each forecasting method, an analysis that is not possible with other error measures such as SMAPE. The rankings of the forecasting methods are listed in Table 8:

Rank | SMAPE | ROC
1 | Theta | Single
2 | Forecast Pro | Theta
3 | Forecast X | Comb S-H-D
4 | Comb S-H-D | Forecast X
5 | Dampen | Flores-Pearce2
6 | RBF | SmartFcs
7 | B-J automatic | Theta-sm
8 | Automat ANN | Dampen
9 | SmartFcs | Forecast Pro
10 | PP-autocast | AAM1
11 | Flores-Pearce2 | Naïve2
12 | Single | AAM2
13 | Theta-sm | B-J automatic
14 | Autobox2 | RBF
15 | AAM1 | Automat ANN
16 | Flores-Pearce1 | PP-autocast
17 | ARARMA | Holt
18 | AAM2 | Autobox3
19 | Holt | Flores-Pearce1
20 | Winter | Autobox1
21 | Autobox1 | Robust-Trend
22 | Naïve2 | Autobox2
23 | Autobox3 | ARARMA
24 | Robust-Trend | Winter
Table 8: Comparative ranking between SMAPE and ROC

Graph 4: Comparative ranking between SMAPE and ROC

7.0 Discussion and Conclusion

This study has replicated the results of the M3-Competition, despite some mismatches in the ranking of methods, and has introduced the Rate of Change (ROC) concept as another method of error measurement. From the results of the ROC analysis, many characteristics of the forecasting methods are better understood.
In the analysis, AAM1 and AAM2 seemed to perform better in most of the categories, but when the number of observations was taken into account their true ranking emerged. After normalising the results per observation in Table 7, Single and Theta performed as well as expected: Single had the fewest under-estimated errors and Theta had the fewest quadrant errors. However, Robust-Trend had the fewest over-estimates. This was not expected, since overall it was one of the worst-performing methods under both SMAPE and ROC; it could be argued, though, that it presented the most under-estimated errors, so its total normal errors were much higher than those of the other methods. The per-observation ROC results also made it evident that AAM1 and AAM2 were the worst methods for over-estimates, under-estimates and quadrants.

In the ROC analysis, the number of correct forecasts could also be accumulated. This showed that Naïve2 had 90 correct values, whilst the others had only 20 correct values; Winter obtained just 1 correct value and was thus regarded as the worst performer under ROC. Nevertheless, the results confirm that the quadrant values remain the most dominant influence on the performance of a forecast: the higher the quadrant error, the less likely the forecasting method is to perform well. For example, Naïve2 had more correct values than Single, but Single had fewer quadrant values and therefore outperformed Naïve2 [see the results recorded by rank, Table 8].

The ROC analysis also showed that the resulting information could be used to improve the accuracy of forecasts [as mentioned in the comparison of SMAPE and ROC]. In ROC, the trend of the plot at each forecasting horizon can explain the performance of each method. A good example is obtained by taking the ROC plots of the forecasting horizons from the best and worst methods respectively [Single and Winter]. The Single plot was taken from the 11th forecasting horizon, and the Winter plot from the 9th. Comparing the two plots, the differences in the distribution of errors can be clearly seen. In Single, the data points are mainly distributed in sectors 1, 2, 4 and 5, meaning that the errors obtained were under-estimates and over-estimates [normal errors]. In Winter, by contrast, a large number of data points fell in sector 6, which is a quadrant error; a greater error was therefore created by the Winter method than by the Single method.

In addition, the Winter plot showed more data points in sectors 1 and 4 than in sectors 2 and 5, which means the method tended to over-estimate the forecasts, although a minority of under-estimates was still noted. The same reading can be applied to the Single plot. There were more data points in sector 1 than in sector 2, so the method tended to over-estimate in the positive direction. The picture differed in the negative direction, however, where there were more data points in sector 5 than in sector 4, so the method tended to under-estimate there. From these observations, the analysis of each method could be fed back as advice and used to improve the methods for better forecasts in the future. It is therefore clear that ROC has a much greater advantage in analysing forecasts than other measurement methods.
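The plot reading above can also be reproduced numerically: tally each observation's category for one horizon, then compare the over- and under-estimate counts. The sketch below is self-contained, uses the same illustrative classification rules as before, and the (actual change, forecast change) pairs are invented purely for illustration, not taken from the M3 data.

```python
from collections import Counter

def classify(a: float, f: float) -> str:
    # Same illustrative sector rules as the earlier sketch.
    if a == f:
        return "correct"
    if a * f < 0:
        return "quadrant"
    return "over" if abs(f) > abs(a) else "under"

# Hypothetical (actual change, forecast change) pairs for one horizon;
# in the study these would come from one method's forecasts of the M3
# series (e.g. Single at horizon 11 or Winter at horizon 9).
pairs = [(2.0, 3.5), (1.0, 0.4), (-2.0, -3.0), (-1.5, 2.0), (0.8, 0.8)]

counts = Counter(classify(a, f) for a, f in pairs)
print(counts)  # Counter({'over': 2, 'under': 1, 'quadrant': 1, 'correct': 1})

# Reading the tallies the way the text reads the Single and Winter plots:
if counts["over"] > counts["under"]:
    print("This method tends to over-estimate at this horizon.")
```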
However, more work is needed to produce an ROC analysis than is required for a SMAPE analysis. More care is also needed, since the number of calculations grows as the number of forecasting horizons in each forecasting method increases.

Conclusion

This study has replicated the results of the M3-Competition using SMAPE as the means of error measurement and has undertaken a further analysis of the same data using the ROC method developed by John (2004). The study has shown that the conclusions of the M3-Competition are still valid. However, it could be argued that the SMAPE re-analysis was not totally reliable, owing to some mismatches between the ranking performance of the original SMAPE in the M3-Competition and the SMAPE calculated in this study. There were also data errors found in the source data. In addition, as mentioned earlier, rounding errors could have occurred in the calculations [a risk consistent with Chatfield's (1988, p. 28) warning that 'there are obvious dangers in averaging accuracy across many time series']. Nevertheless, it can be argued that the calculated SMAPE produced the same conclusions as the previous SMAPE results in the M3-Competition, and the same conclusion was also reached by ROC.

This means that all the results tested produced one critical conclusion: the simple methods did outperform many of the more sophisticated methods, both individual and combined, and in general the combined methods performed better than the individual ones. From this analysis, the value of the simple methods can be better appreciated, since they gave equal or greater accuracy when compared with the more complex forecasting methods. This broadens the appeal of such methods, particularly as accuracy is only one indicator of performance besides cost, ease of use and ease of interpretation. As Chatfield (1988, p. 21) noted, simple methods are more likely to be easily understood and implemented by the managers and other workers who use the forecast results.

Reference:

Armstrong J.S. 1985, Testing Outputs, Long-Range Forecasting: From Crystal Ball to Computer (2nd edition), Wiley-Interscience, 348
Chatfield C. 1988, What is the best method of forecasting?, Journal of Applied Statistics, Vol. 15, No. 1
Gilchrist W. 1976, Statistical Forecasting, Wiley-Interscience, 222-225
John E. G. 2004, Comparative assessment of forecasts, International Journal of Production Research, Vol. 42, No. 5, 997-1008
Makridakis S. and Hibon M. 1979, Accuracy of forecasting: an empirical investigation, Journal of the Royal Statistical Society, Series A (General), Vol. 142, No. 2, 97-145
Makridakis S. et al. 1982, The accuracy of extrapolation (time series) methods: results of a forecasting competition, Journal of Forecasting, Vol. 1, 111-153
Makridakis S. et al. 1993, The M2-Competition: a real-time judgmentally based forecasting study, International Journal of Forecasting, Vol. 9, 5-22
Makridakis S. et al. 2000, The M3-Competition: results, conclusions and implications, International Journal of Forecasting, Vol. 16, 451-476
Theil H. 1958, Economic Forecasts and Policy, North-Holland Publishing Company, Amsterdam, 29-30

Website:

M3-Competition data, available at: https://forecasters.org/data/m3comp/M3C.xls
Macro code, available at: https://club.excelhome.net/thread-346575-1-1.html [Accessed: 8 January 2010]

Equations:

1.1 Symmetric Mean Absolute Percentage Error (SMAPE)
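The equation itself did not survive extraction from the original document; for reference, the SMAPE definition used in the M3-Competition (Makridakis et al. 2000) takes the following form, where $Y_t$ is the actual value, $F_t$ the forecast and $n$ the number of forecasts:

$$\mathrm{SMAPE} = \frac{1}{n}\sum_{t=1}^{n}\frac{\left|Y_t - F_t\right|}{\left(Y_t + F_t\right)/2}\times 100$$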
From: Appendix A, The M3-Competition: results, conclusions and implications, Makridakis S. and Hibon M., International Journal of Forecasting, 2000, Vol. 16, 461

2.1 Actual series pair, 2.2 Forecast series pair, 2.3 Normal error, and 2.4 Quadrant error

From: Rate of change method (ROC), Comparative assessment of forecasts, John E. G., International Journal of Production Research, 2004, Vol. 42, No. 5, 1000-1001
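The four ROC expressions were likewise lost with the original images. The following is only a plausible formalisation assembled from the descriptions earlier in this study (series pairs, same-direction normal errors, opposite-direction quadrant errors); the exact expressions in John (2004) may differ. Writing $A_t$ for the actual value and $F_t$ for the forecast, with actual change $\Delta A_t = A_{t+1} - A_t$ and forecast change $\Delta F_t = F_{t+1} - A_t$:

$$\begin{aligned}
&\text{(2.1) actual series pair:} && (A_t,\; A_{t+1})\\
&\text{(2.2) forecast series pair:} && (A_t,\; F_{t+1})\\
&\text{(2.3) normal error (same direction):} && e_t^{N} = \left|\Delta A_t - \Delta F_t\right| \quad \text{if } \operatorname{sign}(\Delta A_t) = \operatorname{sign}(\Delta F_t)\\
&\text{(2.4) quadrant error (opposite direction):} && e_t^{Q} = \left|\Delta A_t - \Delta F_t\right| \quad \text{if } \operatorname{sign}(\Delta A_t) \neq \operatorname{sign}(\Delta F_t)
\end{aligned}$$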