The Study Group proposed a way forward based on Bayes' theorem for the marginal distribution of damage and found an analytical expression for the damage distribution function. However, the expression is an integral that must be evaluated numerically, and Gauss-Hermite quadrature was proposed to carry out the calculations. The approach seems feasible to incorporate into the existing models, and the additional computational load is estimated to be marginal relative to current computational demands.
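As a sketch of how such a quadrature evaluation might look (the damage model itself is not reproduced here; `f` stands in for a generic integrand against a Gaussian density, and all parameter values are illustrative):

```python
import numpy as np

def gauss_hermite_expectation(f, mu=0.0, sigma=1.0, n=32):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature.

    The change of variables x = mu + sqrt(2)*sigma*t absorbs the Gaussian
    density into the Hermite weight exp(-t^2), so the expectation becomes
    (1/sqrt(pi)) * sum_i w_i f(mu + sqrt(2)*sigma*t_i).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    x = mu + np.sqrt(2.0) * sigma * nodes
    return float(np.sum(weights * f(x)) / np.sqrt(np.pi))

# Sanity check: E[X^2] = 1 for a standard normal variable.
second_moment = gauss_hermite_expectation(lambda x: x**2)
```

With n nodes the rule is exact for polynomial integrands up to degree 2n-1, which is why a modest n usually suffices and the extra computational load stays small.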

We developed two independent path distance metrics. One suggested that sequential storms within a given hurricane season are more likely to follow each other than any other pair of storms within that season, but this conclusion was not supported by the other metric.
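One simple choice of path distance metric (illustrative only; the report's two metrics are not specified here) is the mean pointwise separation between two tracks after resampling both to a common number of points:

```python
import numpy as np

def track_distance(track_a, track_b, n_points=50):
    """Mean pointwise separation between two storm tracks.

    Each track is an (m, 2) array of positions. Both tracks are resampled
    to n_points by linear interpolation along arc length, so tracks of
    different lengths can be compared point by point.
    """
    def resample(track):
        track = np.asarray(track, dtype=float)
        seg = np.linalg.norm(np.diff(track, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
        s_new = np.linspace(0.0, s[-1], n_points)
        return np.column_stack(
            [np.interp(s_new, s, track[:, k]) for k in range(2)]
        )

    a, b = resample(track_a), resample(track_b)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

Two identical tracks give distance zero, and two parallel tracks separated by a constant offset give exactly that offset, which makes the metric easy to sanity-check.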

We also considered how local and large-scale air pressure gradients might affect hurricane paths. A point vortex model in the presence of a steering flow field was developed and used to simulate the paths of two time-displaced vortices. In order for the vortices to follow each other, they had to be relatively weak compared to the steering flow field. At realistic vortex strengths, the trajectories became chaotic.
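A minimal version of such a point vortex simulation, assuming a uniform steering flow and forward Euler time stepping (strengths and flow values below are hypothetical, not fitted to hurricane data):

```python
import numpy as np

def simulate_vortices(z0, strengths, steering=1.0 + 0.0j, dt=0.01, n_steps=1000):
    """Integrate 2D point vortices advected by each other plus a uniform
    steering flow, using complex positions z = x + iy.

    The conjugate velocity induced at z by a vortex of circulation Gamma
    at z_k is -i*Gamma / (2*pi*(z - z_k)).
    Returns an (n_steps+1, len(z0)) array of complex positions.
    """
    z = np.array(z0, dtype=complex)
    gam = np.asarray(strengths, dtype=float)
    path = [z.copy()]
    for _ in range(n_steps):
        w = np.full(len(z), np.conj(steering), dtype=complex)  # conjugate velocity
        for j in range(len(z)):
            for k in range(len(z)):
                if j != k:
                    w[j] += -1j * gam[k] / (2 * np.pi * (z[j] - z[k]))
        z = z + np.conj(w) * dt  # forward Euler step
        path.append(z.copy())
    return np.array(path)
```

With weak circulations the paths track the steering flow; increasing the strengths lets the mutual induction dominate, which is where chaotic trajectories appear.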

In summary, our metrics provided conflicting evidence for the notion of hurricane track memory. A large-scale steering flow field did not appear to provide a sufficient explanation for hurricanes following each other, though this does not preclude hurricane track memory arising from localised physical changes following a large storm.

The aim of the optimization is to distribute collateral value among the connected loans in a way that minimizes the amount of LLP. This is easily done at the level of a single loan; the goal is a universal algorithm applicable to all loans and all collateral at the level of the Bank's portfolio.
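A greedy sketch of such an allocation (illustrative only: the actual LLP rules, eligibility constraints, and haircuts are bank-specific and are assumed away here, with LLP taken simply as total uncovered exposure):

```python
def allocate_collateral(exposures, collaterals, links):
    """Greedily assign collateral value to connected loans.

    exposures:   {loan_id: exposure amount}
    collaterals: {coll_id: collateral value}
    links:       {coll_id: [loan_ids it may secure]}
    Returns (allocation, total_llp), where allocation maps
    (coll_id, loan_id) to the value assigned and total_llp is the
    exposure left uncovered after allocation.
    """
    uncovered = dict(exposures)
    allocation = {}
    for cid, value in collaterals.items():
        remaining = value
        # cover the largest uncovered connected exposures first
        for lid in sorted(links.get(cid, []), key=lambda l: -uncovered[l]):
            if remaining <= 0:
                break
            used = min(remaining, uncovered[lid])
            allocation[(cid, lid)] = allocation.get((cid, lid), 0.0) + used
            uncovered[lid] -= used
            remaining -= used
    return allocation, sum(uncovered.values())
```

A greedy pass like this is optimal for a single loan but not in general on a portfolio with overlapping links, which is exactly why a universal algorithm is the harder goal.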

Two strategies were tested on a portfolio consisting of 35 risky assets. The first uses periodically updated optimal weights from standard Markowitz/Sharpe portfolio theory. The second removes a fixed number of assets that have the highest positive correlation with the rest of the portfolio. Both approaches perform better (have a larger Sharpe ratio) than the existing strategies.
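The weight computation for the first strategy might be sketched as unconstrained maximum-Sharpe (tangency) weights; this assumes no short-sale limits or transaction costs, which the report does not detail:

```python
import numpy as np

def tangency_weights(mu, cov, rf=0.0):
    """Maximum-Sharpe weights from standard Markowitz/Sharpe theory:
    w proportional to inv(Sigma) @ (mu - rf), normalised to sum to 1."""
    excess = np.asarray(mu, dtype=float) - rf
    raw = np.linalg.solve(np.asarray(cov, dtype=float), excess)
    return raw / raw.sum()

def sharpe_ratio(w, mu, cov, rf=0.0):
    """Sharpe ratio of portfolio w per period."""
    w = np.asarray(w, dtype=float)
    return float((w @ mu - rf) / np.sqrt(w @ cov @ w))
```

"Periodically updated" then just means re-estimating `mu` and `cov` on a rolling window and recomputing the weights each rebalance date.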

We introduce a criterion to test whether a dynamic pricing mechanism under investigation is a g-pricing mechanism. This domination condition was statistically tested using CME data documents, and the test result is significantly positive. We also provide some useful characterizations of a pricing mechanism by its generating function.

Here, we address the issue of where to locate a new car center based on a limited dataset. A method for distilling aggregate population information down to sub-regions is developed to provide estimates that feed into the optimization algorithm.

Two measures were used in the optimization: (i) total market share and (ii) total attractiveness. Optimizing total market share is found to lead to placing the center close to competitors, while optimizing total attractiveness leads to placing the center closer to the centroid of the population.
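One common way to formalise a market-share measure is a Huff-type gravity model; the sketch below assumes that form (the report does not specify its exact measures), with each population point splitting its demand among centres in proportion to 1/distance^beta:

```python
import numpy as np

def market_share(candidate, competitors, pop_points, pop_weights, beta=2.0):
    """Estimated share of total demand captured by a new centre at
    `candidate`, given competitor locations and weighted population points.
    """
    centres = np.vstack([candidate, competitors])          # new centre is row 0
    d = np.linalg.norm(pop_points[:, None, :] - centres[None, :, :], axis=2)
    attr = 1.0 / np.maximum(d, 1e-9) ** beta               # gravity attraction
    share_new = attr[:, 0] / attr.sum(axis=1)              # share per pop point
    return float(np.sum(pop_weights * share_new) / np.sum(pop_weights))
```

Sweeping `candidate` over a grid of feasible sites and taking the argmax gives the market-share-optimal location; replacing the share term with raw attraction gives the attractiveness variant.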

The Study Group decided to focus on the sub-problem of finding the relation between spending on science and the quality of science itself. As a result, we developed two independent methodologies. The more promising one is based on the theory of time-delay systems, which captures the effects of the time lag between the use of funds and the results of scientific work. Moreover, the methodology makes it possible to seek an optimal spending scenario that fulfils prescribed constraints (e.g. minimizing costs while remaining above a desired level of quality of science).
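A toy delay model conveys the idea (the Study Group's actual model form is not reproduced; the equation, coefficients, and time step below are all illustrative): quality q responds to funding f with a lag tau and decays without renewal, dq/dt = a*f(t - tau) - b*q(t).

```python
import numpy as np

def simulate_delayed_quality(funding, a=1.0, b=0.5, tau_steps=24, dt=1.0 / 12):
    """Forward Euler simulation of dq/dt = a*f(t - tau) - b*q(t).

    funding: array of funding levels per time step; tau_steps is the
    delay expressed in steps (24 monthly steps = a two-year lag).
    Returns the quality trajectory q of length len(funding) + 1.
    """
    q = np.zeros(len(funding) + 1)
    for n in range(len(funding)):
        f_delayed = funding[n - tau_steps] if n >= tau_steps else 0.0
        q[n + 1] = q[n] + dt * (a * f_delayed - b * q[n])
    return q
```

Under constant funding f0 the model settles at q = (a/b)*f0, and the lag means any change in spending only shows up in quality tau later, which is exactly the effect an optimal spending scenario has to plan around.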

The second methodology is premised on Stochastic Frontier Analysis and can be applied to determine the form of the relation between the amount of financing and the results of scientific work. It offers considerable advantages for analysing several forms of the relation (production functions) at once and for choosing the most suitable one.

Both methods are promising; however, additional work is necessary to apply them successfully to real-life problems.

(1) Further partitioning of output load and prices from an ESE into off-peak, peak and weekend periods to determine the subsequent effect on earnings.

(2) The diagnosis of simulated load paths. As simulated load was not supplied for all engines, the diagnostics developed in this report did not include an analysis of load.

(3) The building of a response surface to capture the interaction between temperature, load and price.

(4) Examination of the convergence behaviour of an ESE. Convergence in this context means the determination of the minimum number of load and price paths required from a simulator in order to return expected profiles that conform to industry expectations. This would involve the sequential testing of an increasing number of simulated paths from an ESE in order to determine the number required.

In conclusion, it is important to understand that each of the simulators diagnosed in this study was criticised according to industry expectations, and only to the degree that the diagnostics employed here reflect those expectations. In fact, all simulators will attract criticism, given that they are calibrated on historical data yet are expected to generate future prices for market conditions that are unknown. The mark of an appropriate ESE is that the future load and pricing structure it generates is not too much at variance with industry expectations. A critical function of a simulator is not to overestimate or underestimate load and prices to the extent that the risk metrics used to govern the earnings risk faced by an electricity retailer are compromised and the retailer's book becomes either grossly over-hedged or under-hedged.

The study group concluded that while 'prediction' of price in any meaningful sense was not viable, a model for scenario analysis could be realised. The model did not incorporate all of the factors of interest, but did model important time lags in the response of market players' future behaviour to current oil prices.

Consideration of the optimisation of supply through new capacity in the telecoms industry led to a generalisation of the standard Cournot-Nash equilibrium. This indicates how an output-constrained competitive market might operate. It enables identification of different pricing regimes determined by the level of competition and the resource limitations of particular supplier firms.

Two models were developed sufficiently to enable simulation of various conditions and events. The first modelled oil price as a mean reverting Brownian motion process. Strategies and scenarios were included in the model and realistic simulations were produced. The second approach used stability analysis of an appropriate time-delayed differential equation. This enabled the identification of unstable conditions and the realisation of price oscillations which depended on the demand scenarios.
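The mean-reverting price process is typically an Ornstein-Uhlenbeck model, dX = theta*(mu - X) dt + sigma dW; the sketch below uses its exact discretisation with illustrative, uncalibrated parameters:

```python
import numpy as np

def simulate_ou(x0, mu, theta, sigma, dt, n_steps, n_paths, seed=0):
    """Simulate a mean-reverting Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW using the exact one-step
    transition, so no discretisation bias is introduced.

    Returns an (n_paths, n_steps + 1) array of simulated paths.
    """
    rng = np.random.default_rng(seed)
    x = np.full((n_paths, n_steps + 1), float(x0))
    a = np.exp(-theta * dt)                              # per-step decay
    sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))   # per-step noise scale
    for n in range(n_steps):
        x[:, n + 1] = mu + (x[:, n] - mu) * a + sd * rng.standard_normal(n_paths)
    return x
```

The long-run level is `mu` with stationary standard deviation sigma/sqrt(2*theta), which is what makes the process suitable for scenario analysis rather than point prediction.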

Currently many actuaries may assume that the volatility of property assets is between those of equities and bonds, but without quantifying it from real data. The challenge for the Study Group is to produce a model for estimating the volatility or uncertainty in property asset values, for use in portfolio planning. The Study Group examined contexts for the use of volatility estimates, particularly in relation to solvency calculations as required by the Financial Services Authority, fund trustees and corporate boards, and it proposed a number of possible approaches. This report summarises that work, and it suggests directions for further investigation.

Genus had provided us with sample data, consisting of just over 12 years' worth of monthly returns on a universe of 60 stocks, along with time series of 34 factors for each of the stocks. Using these data, the approach was to build software models (in MATLAB) for:

1. ranking the stocks based on factor information;

2. implementing a trading strategy based on a stock ranking and assessing the performance of a given trading strategy by looking at measures such as hit ratio, information ratio and spread.

The IPSW team implemented a simplified trading strategy of selling the entire portfolio each month, and using the proceeds to invest equally in the top 20% of stocks as given by the computed ranking. They also implemented the following measures of portfolio performance: excess return, hit ratio and information ratio.
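The simplified strategy and two of the performance measures might be sketched as follows (hypothetical array layout; transaction costs are ignored, as in the simplified strategy, and the spread measure is omitted):

```python
import numpy as np

def backtest_top_quintile(scores, returns, benchmark):
    """Each month, sell the whole portfolio and invest equally in the
    top 20% of stocks by the previous month's ranking score.

    scores, returns: (months, stocks) arrays; benchmark: (months,) array
    of benchmark returns. Returns (portfolio returns, hit ratio,
    annualised information ratio).
    """
    months, stocks = returns.shape
    k = max(1, stocks // 5)                       # top quintile size
    port = np.empty(months - 1)
    for t in range(months - 1):
        top = np.argsort(scores[t])[-k:]          # best-ranked stocks at t
        port[t] = returns[t + 1, top].mean()      # equal-weight next month
    active = port - benchmark[1:]
    hit_ratio = float(np.mean(active > 0))        # fraction of winning months
    info_ratio = float(np.sqrt(12) * active.mean() / active.std(ddof=1))
    return port, hit_ratio, info_ratio
```

The hit ratio is the fraction of months the strategy beats the benchmark; the information ratio is the annualised mean active return divided by its volatility.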

The essential goal of a credit institution is to minimize its losses due to default. By default we mean any event that causes an asset to stop producing income. This can be the closure of a stock, the inability of an obligor to pay their debt, or even an obligor's decision to pay off all their debt.

Minimizing the combined losses of a credit portfolio is not a deterministic problem with one clean solution. The many factors influencing each obligor (different market sectors, their interactions and trends, etc.) are more commonly dealt with in terms of statistical measures. These include the expected return and the volatility of each asset over a given time horizon.

In this sense, we consider in the following the expected loss and risk associated with the assets in a credit portfolio over a given time horizon of (typically) 10 to 30 years. We use a Monte Carlo approach to simulate the loss of the portfolio in multiple scenarios, which leads to a distribution function for the loss of the portfolio over that time horizon. We then compare the results of the simulation to a Gaussian approximation obtained via the Lindeberg-Feller theorem. Consistent with our expectations, the Gaussian approximation compares well with the Monte Carlo simulation in the case of a portfolio of very risky assets.
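The simplest version of this comparison, assuming independent obligors and a single period (the report's multi-year, correlated setting is not reproduced), can be sketched as:

```python
import numpy as np
from statistics import NormalDist

def loss_distribution_mc(ead, pd, lgd, n_sims=20_000, seed=0):
    """Monte Carlo portfolio losses for independent obligors.

    ead: exposures at default; pd: default probabilities; lgd: loss given
    default fractions. Each simulation draws independent defaults and
    sums the resulting losses. Returns an array of n_sims losses.
    """
    rng = np.random.default_rng(seed)
    li = np.asarray(ead) * np.asarray(lgd)               # loss if obligor defaults
    defaults = rng.random((n_sims, len(li))) < np.asarray(pd)
    return defaults @ li

def gaussian_loss_quantile(ead, pd, lgd, alpha=0.99):
    """Central-limit (Lindeberg-Feller style) Gaussian approximation to
    the alpha-quantile of the portfolio loss distribution."""
    li = np.asarray(ead) * np.asarray(lgd)
    p = np.asarray(pd)
    mean = float(np.sum(li * p))
    std = float(np.sqrt(np.sum(li**2 * p * (1.0 - p))))
    return mean + std * NormalDist().inv_cdf(alpha)
```

For a large portfolio of risky (high-pd) assets the two quantile estimates agree closely, which matches the observation above; for small or very skewed portfolios the Gaussian tail would be too thin.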

Using a model that produces a distribution of losses allows credit institutions to estimate their maximum expected loss at a given confidence level. This in turn helps in making important decisions about whether to grant credit to an obligor, exercise options, or otherwise take advantage of sophisticated securities to minimize losses. Ultimately, this leads to the process of credit risk management.

Three subgroups were formed, and each developed a different approach for solving the problem. These were the Portfolio Selection Algorithm Approach, the Statistical Inference Approach, and the Integer Programming Approach.

In the process of constructing the default tariff, IPART assumes that the cost of purchasing energy is equal for all retailers. IPART also makes no allowance for hedging costs, which will vary depending upon the NSLP of the electricity retailer. If one retailer has more NSLP volatility than other retailers, their hedging costs for default customers will increase. Under the current default tariff structure, these increased hedging costs become an unrecoverable expense.

The aim of this project is to explore the volatility of Integral Energy’s NSLP, relative to that of other retailers, with a view toward developing a risk multiplier that accurately and reliably quantifies the volatility differences between NSLPs.
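One candidate form for such a multiplier (an assumed form for illustration, not IPART's definition) is the ratio of normalised load volatilities:

```python
import numpy as np

def risk_multiplier(retailer_nslp, benchmark_nslp):
    """Ratio of the retailer's relative load volatility to a benchmark's.

    Each NSLP is an interval load series (e.g. half-hourly). Volatility
    is normalised by mean load so that retailers of different sizes are
    comparable; a multiplier above 1 indicates a more volatile profile.
    """
    def rel_vol(load):
        load = np.asarray(load, dtype=float)
        return load.std(ddof=1) / load.mean()
    return rel_vol(retailer_nslp) / rel_vol(benchmark_nslp)
```

Normalising by mean load makes the multiplier scale-invariant, so it captures differences in profile shape rather than customer numbers; making such a measure accurate and reliable across seasons and weather conditions is the substance of the project.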