In this report, we consider how errors may arise directly from the images recorded by the cameras, due to both occlusion of people and image distortion due to a fisheye lens. We also develop a statistical model of human counting errors and attempt to estimate human accuracy from data. Finally, we attempt to relate human and computer accuracy on the basis of simplifying statistical approximations.

Sensors record three-axis accelerometer, gyroscope and magnetometer data at high frequency (100 Hz) and GPS data (position in three-dimensional space) at low frequency (~ 1 Hz). The goal is to combine these measurements into a trajectory of the point holding the sensors. The trajectory, consisting of the time profiles for position, orientation, velocity and acceleration, has to be suitable for performance analysis and animation (so needs higher time resolution and accuracy than the GPS data). Computations are performed in a post-processing step, not in real-time.

Two algorithms to incorporate the GPS data are proposed. The first is a simple generalization of the prior solution provided by the problem presenter. It always produces a result quickly and avoids drift. However, it relies on heuristically tuned "gains". The head orientation is not guaranteed to be correct, which is likely to have knock-on effects on speed and acceleration.

The second approach applies a linear Kalman smoother iteratively. This algorithm exploits the post-processing nature of the computations, taking into account past and future measurements in an optimal manner. It avoids heuristic gains and generates covariance matrices that give an estimate of the error in the results.
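
The forward/backward structure of such a linear Kalman (RTS) smoother can be sketched on a toy one-dimensional constant-velocity model. The function name, model and noise levels below are illustrative assumptions, not the report's actual implementation:

```python
import numpy as np

def kalman_rts(zs, dt, q, r):
    """Forward Kalman filter plus RTS backward smoother for a 1-D
    constant-velocity model.  State x = [position, velocity]; only
    the position is measured (as with GPS fixes)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise
    R = np.array([[r]])                            # measurement noise
    n = len(zs)
    xs, Ps, xps, Pps = [], [], [], []
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    for z in zs:
        # predict
        xp = F @ x
        Pp = F @ P @ F.T + Q
        # update with the current measurement
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (np.array([z]) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    # backward (RTS) pass: fold in information from future measurements
    xs_s = [xs[-1]]; Ps_s = [Ps[-1]]
    for k in range(n - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        x_s = xs[k] + C @ (xs_s[0] - xps[k + 1])
        P_s = Ps[k] + C @ (Ps_s[0] - Pps[k + 1]) @ C.T
        xs_s.insert(0, x_s); Ps_s.insert(0, P_s)
    return np.array(xs_s), Ps_s
```

Note that, unlike a causal filter, the smoothed estimate at each time uses measurements from both directions, which is exactly what the post-processing setting permits.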

Further investigations looked into the potential of wavelet smoothing the measurements and potential applications for Android devices.

Addressing the questions raised by TalkTalk regarding DLM requires three complementary types of modelling: physical modelling of the electrical characteristics of ADSL lines (these are existing copper telephone lines); information modelling of the associated communication channel, taking into account the levels of noise and the coding of ADSL signals; and finally statistical modelling, which enables data gathered from the exchange to be used to construct relationships between key variables and to identify lines that might be faulty or on poor profiles. The work of the Study Group 2010 contributed mainly to developing the statistical modelling. The report summarises the progress that was made and the opportunities for developing these techniques further.

One of the key measures ensuring the model's reliable operation is the prevention of cyber attacks that would block the available e-government services.

In order to achieve business continuity of these services, a certain amount of funding has to be invested. Spending these funds correctly will ensure that external attacks are blocked, or that services are repaired after cyber attacks have taken place.

This resulted in a heuristic approach, which we presented at the end of the week in the form of a prototype tool. In this report we give a detailed account of the ideas we used, and describe possibilities to extend the approach.

limited memory, where the wireless communication between the processors can be highly unreliable. For this setting, we propose a number of algorithms to estimate the number of nodes in the network, and the number of direct neighbors of each node. The algorithms are simulated, allowing comparison of their performance.

In this paper, we present a linear programming model for maximizing the amount of decentralized power generation while respecting the load limitations of the network.

We describe a prototype showing that for an example network the maximization problem can be solved efficiently. We also modeled the case where the power consumption and decentralized power generation are considered as stochastic variables, which is inherently more complex.
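
As a minimal sketch of this kind of model, consider a hypothetical three-bus radial feeder with made-up line limits (not the network from the report); the maximization can then be posed directly to an LP solver:

```python
from scipy.optimize import linprog

# Toy radial feeder (illustrative numbers): generators g1..g3 sit at
# buses 1..3 along a feeder, and line segment k carries all power
# injected at bus k and beyond.  Maximize g1 + g2 + g3 subject to
# per-segment load limits.
c = [-1.0, -1.0, -1.0]          # linprog minimizes, so negate the objective
A_ub = [[1, 1, 1],              # segment 1 carries g1 + g2 + g3
        [0, 1, 1],              # segment 2 carries g2 + g3
        [0, 0, 1]]              # segment 3 carries g3
b_ub = [5.0, 3.0, 2.0]          # assumed line load limits
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
max_generation = -res.fun       # total decentralized generation admitted
```

For this toy instance the binding constraint is the first segment, so the maximum admissible generation is 5.0.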

We describe the optical principles used to obtain 3D information from 2D information. The main interest is focused on the problems of intelligent matching between segments of photographs that are taken without constraints. The automation of such a matching task is useful in measuring solids, where high accuracy and the avoidance of human assistance are desirable. Solutions proposed in the Industry and Mathematics Workshop, together with public-domain procedures for editing photographs, are proposed as complementary.

Three aspects were considered, with implementation schemes that incur minimal protocol changes over systems in existence.

1. The present scheme writes to the pacemaker and echoes each bit back within a 2 ms frame. If any echoed bit disagrees with the bit sent, the message is re-transmitted from the start of the block. Because the return link is poor in terms of signal-to-noise ratio, echoing bits up this link degrades system performance, particularly as the noise power increases. An ARQ (automatic repeat request) scheme is suggested as a solution to this problem.

2. When data is to be read from the pacemaker, the reply is sent via two 4 ms frames of 6 bits each. A CRC (cyclic redundancy check) is performed on the returned data (8 bits), which detects up to three errors, and the redundancy bits are appended to the end of the 9 bits. If any errors are detected, a retransmission is required. Using an FEC (forward error correction) scheme to correct errors, together with a CRC to check the integrity of the data, throughput can be significantly improved, especially in a noisy environment. The scheme we suggest is to encode the data bits plus CRC with a (23,12) Golay code, and send the data using four 4 ms frames. The Golay code suggested is a perfect code, capable of correcting three errors.
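
The Golay encoder itself is beyond a short sketch, but the CRC component can be illustrated by polynomial division over GF(2). The generator polynomial used below (x^4 + x + 1) is an assumption for illustration, not the one specified in the pacemaker protocol:

```python
def crc_remainder(data_bits, poly_bits):
    """Append len(poly)-1 zero bits and return the remainder of
    polynomial division over GF(2); these are the CRC check bits.
    poly_bits lists the generator's coefficients, MSB first
    (here an assumed generator, x^4 + x + 1 -> [1, 0, 0, 1, 1])."""
    bits = list(data_bits) + [0] * (len(poly_bits) - 1)
    for i in range(len(data_bits)):
        if bits[i]:                       # XOR the generator in at position i
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p
    return bits[len(data_bits):]

def crc_check(data_bits, check_bits, poly_bits):
    """True if the received check bits match a recomputed CRC."""
    return crc_remainder(data_bits, poly_bits) == list(check_bits)
```

In the suggested scheme the check bits computed this way would be appended to the 8 data bits before Golay encoding; any detected mismatch after decoding triggers a retransmission.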

3. The final aspect we considered is the possible system performance improvement from using soft-decision quantization of the received data at the programmer. This system gives gains of the order of 1.75 dB over current practice.

The problem posed to the Study Group is the specification of a specialized evaluator which can be used when the computation is indeterminate in detail, but does in fact contain enough information to produce a solution.

the classical M/G/c queueing model.

The state-of-the-art methods use optimisation to find the seismic properties of the rocks, such that when used as the coefficients of the equations of a model, the measurements are reproduced as closely as possible. This process requires regularisation if one is to avoid instability. The approach can produce a realistic image but does not account for uncertainty arising, in general, from the existence of many different patterns of properties that also reproduce the measurements.

In the Study Group a formulation of the problem was developed, based upon the principles of Bayesian statistics. First the state-of-the-art optimisation method was shown to be a special case of the Bayesian formulation. This result immediately provides insight into the most appropriate regularisation methods. Then a practical implementation of a sequential sampling algorithm, using forms of the Ensemble Kalman Filter, was devised and explored.

The problem posed falls within a discipline called intelligent process monitoring (IPM). This consists of a set of software tools whose objective is the detection and diagnosis of abnormal events in processes. Within the petrochemical industry, early diagnosis of abnormal events is of vital importance because of the hazardous nature of the processes involved. It is illustrative to mention that the annual losses associated with abnormal events in the US petrochemical industry amount to 20 billion dollars.

The processes of interest are continuous processes operating at steady state. They are controlled by operators in control rooms where, from a PC terminal, they can obtain on-line information on different process variables and can also act on the process by varying the opening of valves. In addition, automatic control systems regulate the opening of certain control valves to stabilise the process. IPM aims to support operators in the task of diagnosing abnormal events.

The proposed fault diagnosis method is based on tracking the sign of the deviation of each variable (the perturbation) as a consequence of a given fault. The attached paper presents a technique for determining, from the equations describing the behaviour of a process, the sign of the perturbation of each variable under the different faults that may occur in a given process.

The diagnosis method consists of matching the perturbation vector obtained from the current measurements of the plant variables to one of the precomputed vectors associated with the known faults. This raises two distinct problems: on the one hand, the need to automate the generation of the perturbation vector from the analytical model of the process; and on the other, determining the sign of the perturbation of each variable from the characterisation of the time series and the current state of the process.

The first problem is posed in the attached paper and requires some knowledge of the processes involved and of graph theory; interested readers may refer to that paper. The second is more abstract and requires only knowledge of time series; a brief introduction follows. To determine the sign of the variation of a perturbed variable, two characteristics of the system under study must be taken into account. The first is its natural variability, which allows us to know whether the measured perturbation is significant or is part of the natural noise of the process.

The second is the characteristic time of the processes associated with that variable, which indicates how long we must wait for the perturbation to become evident in the measurements.

The second problem to be solved consists of characterising the time series of the measured variables in order to determine the presence and the sign of a perturbation in each of them. For this purpose, real data from a petrochemical plant will be available, sampled at intervals of about one minute over a period of 7 years. The choice of which of the two problems to analyse will depend on the interest of the participants.

The question posed by BT concerned ray counting on the assumptions that:

(i) rays were subject to a reflection coefficient of about 0.5 when bouncing off a house wall and

(ii) that diffraction at corners reduced their energy by 90%. The quantity of particular interest was the number of rays that need to be accounted for at any particular point so that those neglected contribute only 10% of the field at that point; a secondary question concerned the use of rays to predict regions where the field was less than 1% of that in the region directly illuminated by the antenna.

The progress made in answering these two questions is described in the next two sections and possibly useful representations of the solution of the Helmholtz equations in terms other than rays are given in the final section.

With regard to the main objective, the study group focused on exploration methods based on hash values. It was shown that under the tight constraints suggested by PWPW it is not possible to provide any method based only on the hash values. This observation stems from the fact that the high probability of bit corruption leads to an unacceptably large number of broken hashes, which in turn contradicts the limitation on additional storage space.

However, having loosened the initial constraints to some extent, the study group has proposed two methods that use only the hash values. The first method, based on a simple scheme of subdividing the data into disjoint subsets, is provided as a benchmark for the other methods discussed in this report. The second method (the "hypercube" method), introduced as an instance of the wider class of clever-subdivision methods, is built on the concept of rewriting the data stream into an n-dimensional hypercube and calculating hash values for certain particular (overlapping) sections of the cube.
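
The flavour of such overlapping-section schemes can be sketched in two dimensions (the method itself uses an n-dimensional cube): hashing every row and every column of a grid lets a single corrupted cell be localized as the intersection of the failing row and column hashes. The function names and grid layout below are illustrative assumptions:

```python
import hashlib
import numpy as np

def slice_hashes(grid):
    """Hash every row and every column of a 2-D grid of byte values.
    Rows and columns are the 'overlapping sections': each cell is
    covered by exactly one row hash and one column hash."""
    def h(cells):
        return hashlib.sha256(cells.tobytes()).hexdigest()
    rows = [h(grid[i, :]) for i in range(grid.shape[0])]
    cols = [h(grid[:, j]) for j in range(grid.shape[1])]
    return rows, cols

def locate_corruption(grid, rows_ref, cols_ref):
    """Compare against reference hashes; a single corrupted cell shows
    up as one failing row hash and one failing column hash, whose
    intersection pinpoints it."""
    rows, cols = slice_hashes(grid)
    bad_rows = [i for i, (a, b) in enumerate(zip(rows, rows_ref)) if a != b]
    bad_cols = [j for j, (a, b) in enumerate(zip(cols, cols_ref)) if a != b]
    return bad_rows, bad_cols
```

The storage cost is one hash per row plus one per column rather than one per cell, which is the trade-off that makes the approach attractive under tight space constraints.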

We have obtained interesting results by combining hash value methods with error-correction techniques. The proposed framework, based on the BCH codes, appears to have promising properties, hence further research in this field is strongly recommended.

As a part of the report we have also presented features of secret sharing methods for the benefit of novel distributed data-storage scenarios. We have provided an overview of some interesting aspects of secret sharing techniques and several examples of possible applications.

There are more than 500 alternative operators and 63 different KPIs that measure the quality of services. The Office of Electronic Communications (UKE) receives periodic reports for each operator with specific data which can be arranged in one table. This table has many unfilled cells and its structure may change over time, which poses the problem of how to compare alternative operators and identify possible discrimination. A concept for defining discrimination was proposed. This approach is based on some additional parameters, which were identified but have not been collected so far.

The main challenge was to design aggregated indicators measuring the quality of services rendered by the wholesale telecommunication operator to alternative operators. The indicators must be comparable, i.e. they should indicate whether some AOs are favoured or discriminated.

As a result, two different methods of computing aggregated indicators were proposed. The first one, Principal Component Analysis, is based on the reduction of data dimension, which facilitates further analysis. It shows whether some AOs are treated in a different way, when compared with others. This may indicate discrimination.
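
A minimal sketch of this use of Principal Component Analysis, on a hypothetical operators-by-KPIs matrix (the standardization step and the synthetic data are assumptions, not UKE's actual pipeline):

```python
import numpy as np

def pca_scores(X, k=2):
    """Project the rows of X (operators) onto the first k principal
    components.  Columns (KPIs) are standardized first so that no
    single KPI dominates the reduced representation."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:k].T
```

An operator treated very differently from the rest then appears as an outlier in the low-dimensional score plot, which is the kind of signal the report interprets as possible discrimination.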

The second method aggregates all values of KPIs and assigns a real number to each alternative operator. Subsequently, rankings of AOs treatment can be set up. This method enables detection of a discriminated operator and was tested in simulations.

While creating a central database of patients, the CSIOZ wishes to provide statistical information for selected institutions. However, there are plans to extend access by providing the statistics to researchers or even to citizens. This might pose a significant risk of disclosing private, sensitive information about individuals. This report proposes some methods to prevent data leaks.

One category of suggestions is based on the idea of modifying statistics so that they remain useful to statisticians while at the same time guaranteeing the protection of patients' privacy.
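
One standard way to modify statistics in this spirit is to perturb released counts with zero-mean random noise, so that averages stay informative while individual records are masked. The Laplace mechanism below is a sketch of this idea, with the privacy parameter chosen arbitrarily:

```python
import numpy as np

def noisy_count(true_count, epsilon, rng):
    """Release a count perturbed by Laplace noise of scale 1/epsilon.
    Smaller epsilon means more noise and stronger protection; because
    the noise has zero mean, aggregate statistics computed from many
    releases remain approximately unbiased."""
    return true_count + rng.laplace(0.0, 1.0 / epsilon)
```

Whether this simple mechanism suffices depends on how many correlated queries an attacker may issue, which is exactly the concern motivating the query-restriction mechanisms discussed next.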

Another group of proposed mechanisms, though sometimes difficult to implement, enables one to obtain precise statistics, while restricting such queries which might reveal sensitive information.

Understanding device characteristics is critical for the design of MOSFETs as part of design tools for integrated circuits such as SPICE. Current methods involve the numerical solution of PDEs governing electron transport. Numerical solutions are accurate, but do not provide an appropriate way to optimize the design of the device, nor are they suitable for use in chip simulation software such as SPICE. As chips contain more and more transistors, this problem will get more and more acute.

There is hence a need for analytic solutions of the equations governing the performance of MOSFETs, even if these are approximate. Almost all solutions in the literature treat the long-channel case (thin devices) for which the PDEs reduce to ODEs. The goal of this problem is to produce analytical solutions based on the underlying PDEs that are rapid to compute (e.g. require solving only a small number of algebraic equations rather than systems of PDEs).

Guided by asymptotic analysis, a fast numerical procedure has been developed to obtain approximate solutions of the PDEs governing MOSFET properties, namely electron density, Fermi potential and electrostatic potential. The approach relies on the channel being long enough, and appears accurate in this limit.

We have also described a heuristic approach, and explored several variants of the algorithm. We found a solution that seems to perform well with reasonable computation time. The heuristic is able to find solutions that respect the degree constraints, but show a small number of violations of the desired time constraints.

Tests on small problems show that the heuristic is not always able to find feasible solutions, even though the exact method has shown that they exist. It would be interesting in the future to look at whether insights gained by examining exact solutions can be used to improve the heuristic.

For the case of longitudinal roughness, we derived a one-dimensional lubrication-type equation for the leading behavior of the pressure in the direction parallel to the velocity of the disk. The coefficients of the equation are determined by solving linear elliptic equations on a domain bounded by the gap height in the vertical direction and the period of the roughness in the span-wise direction.

For the case of transverse roughness the unsteady lubrication equations were reduced, following a multiple scale homogenization analysis, to a steady equation for the leading behavior of the pressure in the gap. The reduced equation involves certain averages of the gap height, but retains the same form of the usual steady, compressible lubrication equations.

Numerical calculations were performed for both cases, and the solution for the case of transverse roughness was shown to be in excellent agreement with a corresponding numerical calculation of the original unsteady equations.

In the workshop we examined models for information flow on networks that considered trade-offs between the overall network utility (or flow rate) and path diversity to ensure balanced usage of all parts of the network (and to ensure stability and robustness against local disruptions in parts of the network).

While the linear programming solution of the basic max-flow problem cannot handle the current problem, the primal/dual formulation used to describe the constrained optimization problem can be applied to the current generation of problems, called network utility maximization (NUM) problems. In particular, primal/dual formulations have been used extensively in studies of such networks.

A key feature of the traffic-routing model we are considering is its formulation as an economic system, governed by principles of supply and demand. Considering channel capacities as a commodity of limited supply, we might suspect that a system that regulates traffic via a pricing scheme would assign prices to channels in a manner inversely proportional to their respective capacities.

Once an appropriate network optimization problem has been formulated, it remains to solve the optimization problem; this will need to be done numerically, but the process can greatly benefit from simplifications and reductions that follow from analysis of the problem. Ideally the form of the numerical solution scheme can give insight on the design of a distributed algorithm for a Transmission Control Protocol (TCP) that can be directly implemented on the network.

At the workshop we considered the optimization problems for two small prototype network topologies: the two-link network and the diamond network. These examples are small enough to be tractable during the workshop, but retain some of the key features relevant to larger networks (competing routes with different capacities from the source to the destination, and routes with overlapping channels, respectively). We have studied a gradient descent method for obtaining the optimal solution via the dual problem. The numerical method was implemented in MATLAB and further analysis of the dual problem and properties of the gradient method were carried out. Another thrust of the group's work was direct simulation of information flow in these small networks via Monte Carlo methods, as a means of directly testing the efficiencies of various allocation strategies.
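
A sketch of the dual gradient method on a tiny NUM instance (in Python rather than MATLAB, with log utilities and an illustrative routing matrix and capacities that are not the workshop's exact networks):

```python
import numpy as np

# Toy NUM instance:  maximize  sum_s log(x_s)  subject to  R x <= c,
# where R[l, s] = 1 if route s uses link l, and lam holds the link
# "prices" (dual variables).  Given prices, each source independently
# picks the rate maximizing log(x_s) - x_s * (price of its route),
# i.e. x_s = 1 / (route price); the prices then move by gradient
# ascent on the dual, rising on congested links.
R = np.array([[1.0, 1.0],      # link 1 is shared by both routes
              [0.0, 1.0]])     # link 2 is used by route 2 only
c = np.array([2.0, 0.5])       # link capacities (assumed values)

lam = np.ones(2)               # initial link prices
step = 0.01
for _ in range(50000):
    x = 1.0 / (R.T @ lam)                            # sources' best responses
    lam = np.maximum(1e-9, lam + step * (R @ x - c)) # price (dual) update
```

For this instance the KKT conditions give the optimum x* = (1.5, 0.5): link 2 caps route 2 at 0.5, and route 1 takes the remaining capacity of the shared link. The price update is exactly the distributed congestion signal that a TCP-like protocol could implement.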

In the present paper, we address two problems concerning the design and performance of an SWSN: optimal sensor placement and algorithms for object detection in the presence of false alarms. For both problems, we propose explicit decision rules and efficient algorithmic solutions. Further, we provide several numerical examples and present a simulation model that combines our placement and detection methods.

2. How to obtain desired transmitted and reflected light intensity profiles or distributions by choosing an appropriate distribution of refractive indices along a fibre.

3. Investigating the use of measurements of light, for instance its polarization, re-emitted from the end of a fibre through which a pulse of light had originally been input, to determine fibre properties.

We examined various mechanisms that might lead to the deflection of the ink drop. In particular, we focused on whether the liquid filament that connects the lead drop to the nozzle is capable of supporting lateral waves which might propagate from the nozzle toward the lead drop and break the symmetry at pinch-off.

The Study Group found clusters in 'signal space,' that is, handsets reporting similar signal strengths with the same base stations and explored methods of locating these clusters geographically.

The Study Group suggested using a bipartite-network approach to construct a similarity matrix that would allow the recommendation scores for different products to be computed. Given a current basket and a customer ID, this approach gives recommendation scores for each available item and recommends the item with the highest score that is not already in the basket. The similarity matrix can be computed offline, while recommendation score calculations can be performed live. This report contains a summary of the Study Group's findings, together with insights into the properties of the similarity matrix and other related issues, such as recommendations for data collection.
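
A minimal sketch of such a bipartite similarity/recommendation scheme on a toy item-user matrix; the normalization below is one simple choice and an assumption, not necessarily the Study Group's exact weighting:

```python
import numpy as np

def similarity_matrix(B):
    """B[i, u] = 1 if user u bought item i.  Similarity of item i to
    item j is here taken as the co-purchase count normalized by item
    j's degree, in the spirit of bipartite-network projections."""
    deg = B.sum(axis=1)               # how many users bought each item
    co = B @ B.T                      # co-purchase counts between items
    return co / np.maximum(deg, 1)    # column j scaled by item j's degree

def recommend(B, basket):
    """Score every item by its total similarity to the basket items
    (computable live, since the matrix is precomputed offline) and
    return the best item not already in the basket."""
    S = similarity_matrix(B)
    scores = S[:, basket].sum(axis=1)
    scores[basket] = -np.inf          # never recommend what's in the basket
    return int(np.argmax(scores))
```

In a deployment the similarity matrix would be built offline from the transaction history, with only the per-basket scoring done at recommendation time, as the report describes.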

We examine how choices for the groove pattern can influence the key properties of the bearing. The focus is to understand the effect of the groove geometry on the pumping action, and in particular the undesirable behavior caused by the low pressures created near the top and bottom ends of the bearing, which, under many conditions, may become negative relative to atmospheric pressure. Negative pressure can result in cavitation or, when it occurs near an air-oil interface, can cause air to be ingested and hence create bubbles. Any bubbles in the oil can corrupt the lubricating layer in the bearing and, as they are created and collapse, can cause significant undesirable vibrations. The negative pressures have therefore been identified as one of the key problems in the design of hard disk drive bearings.

We will use numerical computations and some analysis to show that by modifying the groove geometry we can reduce the negative pressure while retaining good stability characteristics.

At the workshop, the Barker code group split into four non-disjoint subgroups:

- An "algebra group", who explored symmetries of the search space that preserve the autocorrelations' magnitude.

- A "computing group", who explored methods for quickly finding binary codes with very good autocorrelation properties.

- A "statistics group", who explored ways to quantify what has been empirically observed about autocorrelation in the search space S_2^N.

- A "continuous group", who explored a non-discrete analogue of the problem of finding sequences with good autocorrelations.
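
The notion of "good autocorrelation properties" referred to above can be made concrete by computing aperiodic autocorrelation sidelobes; a Barker code is precisely a binary sequence whose sidelobes all have magnitude at most 1. A short sketch:

```python
import numpy as np

def sidelobes(seq):
    """Aperiodic autocorrelation c_k = sum_i s_i * s_{i+k} for shifts
    k >= 1 (the k = 0 peak is excluded).  Good codes have uniformly
    small |c_k|."""
    s = np.asarray(seq, dtype=float)
    n = len(s)
    return [float(np.dot(s[:n - k], s[k:])) for k in range(1, n)]

# The length-13 Barker code, the longest known.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```

Quantities like the peak sidelobe level, or the sum of squared sidelobes, are the natural objective functions for the computing and statistics subgroups' searches over the space S_2^N.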

We focussed on making contributions in two main areas, namely:

1. Identification of sensitive cells in a table,

2. Maximising data utility and minimising information loss - ensuring the table provides useful information.

One of the critical requirements for reliable inundation modelling is an accurate model of the earth's surface that extends from the open ocean through the inter-tidal zone into the onshore areas to be studied. Production of a sufficiently accurate elevation model is a complex and difficult process made more difficult because the available elevation data inevitably will come from a number of different sources and will have a range of vintages, resolutions and reliability.

There are two questions that arise when data is requested. The first deals with the true variability of the topography. Obviously, a flat surface needn’t be sampled nearly as finely as a highly convoluted surface. The second question relates to sensitivity; how are error bars derived for the impact results if the error bars on each elevation point are known? ANUGA solves the 2D nonlinear shallow water wave equations using a finite volume method and typical models can take days of computational time, so proper sensitivity analyses are often prohibitively expensive in terms of computational resources.

The main aim of this project was therefore to understand the uncertainties in the outputs of the inundation model based on possible uncertainty in the input data.

The important first step was to understand what aspect of the imaging problem we were being asked to study. However, since we would not be working directly with raw seismic data, traditional seismic techniques would not be required. Rather, we would be working with a two dimensional image, either a migrated image, a common mid-point (CMP) stack, or a common depth point (CDP) stack. In all cases, the images display the subsurface of the earth with geological structures evident in various layers.

For a given image the local spectrum is computed at each point. The various peaks in the spectrum are used to classify each pixel in the original seismic image resulting in an enhanced and hopefully more useful seismic pseudosection. Thus, the objective of this project was to improve the identification of layers and other geological structures apparent in the two dimensional image (a seismic section, or CDP gather) by classifying and coloring image pixels into groups based on their local spectral attributes.

More recently, researchers have started using a statistical, rather than deterministic, approach to the determination of the earth’s subsurface and its reservoir parameters. That is one of the main approaches that we will use here. Thus, instead of assuming an underlying model, we will use multivariate statistical techniques to determine the earth’s subsurface, using a set of derived “attributes” from the seismic data.

Reverse engineering of integrated circuits is made difficult by the shrinking form factor and increasing transistor density. To perform this complex task Orisar Inc. employs sophisticated techniques to capture the design of an IC. Electron microscope photography captures a detailed image of an IC layer. Because a typical IC contains more than one layer, each layer is photographed and physically removed from the IC to expose the next layer. A noise removal algorithm is then applied to the pictures, which are then passed to pattern recognition software in order to transfer the layer design into a polygonal representation of the circuit. In the last step a knowledgeable human expert looks at the polygonal representation and inputs the design into a standard electronic schematic with a CAD package.

This process is currently very time consuming. We propose a method which can be easily automated, thereby saving valuable worker time and accelerating the process of reverse engineering.

To test the algorithms two case studies were undertaken. The first study involved differentiating BMW and Honda car owners. The algorithms developed were reasonably successful both at finding questions that differentiate these two populations and at identifying common characteristics amongst the groups of respondents. For the second case study it was hoped that the same algorithms could differentiate between consumers of two brands of beer. In this case the first algorithm was not as successful at differentiating between all groups; it showed some distinctions between beer drinkers and non-beer drinkers, but these were not as clearly defined as in the first case study. The second algorithm was then used successfully to further identify spending patterns once this distinction was made. In this second case study a deeper factor analysis could be used to identify a combination of factors which could be used in the first algorithm.

We quantify the quality of service towards the clients of this facility based on a service level agreement between the two parties: the web hosting provider and the client. We assume that the client has the knowledge and resources to quantify its needs. Based on these quantifications, which in our model become parameters, the provider can establish a service offer. In our model, this offer covers the quality of service and the price options for it.

In this paper, we begin by using algorithmic graph theory to study optimal patterns for adding IC images one at a time to a grid. In the remaining sections we study ways of stitching all the images simultaneously using different optimisation approaches: least squares methods, simulated annealing, and nonlinear programming.

The PDS, proposed by Random Knowledge Inc., detects and localizes traffic patterns consistent with attacks hidden within large amounts of legitimate traffic. With the network’s packet traffic stream as its input, the PDS relies on high-fidelity models of normal traffic against which it can critically judge the legitimacy of any substream of packet traffic. Because of this reliance on an accurate baseline model for normal network traffic, in this workshop we concentrate on modelling normal network traffic with a Poisson process.
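
A simple sanity check on the Poisson assumption is the dispersion index (the variance-to-mean ratio of packet counts per interval), which is close to 1 for Poisson traffic and markedly larger for bursty traffic. The sketch below uses simulated counts, not real trace data:

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio of interval counts: approximately 1 for
    a Poisson process, noticeably greater than 1 for bursty
    (overdispersed) traffic."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

# Simulated packets-per-interval under the Poisson baseline model.
rng = np.random.default_rng(42)
poisson_counts = rng.poisson(30.0, size=5000)
```

Real network traffic often fails this check (long-range dependence and burstiness push the index well above 1), which is why validating the baseline model against traces matters before using it to flag attacks.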

The essence of the problem therefore is: given a set of traces that are expected to be representative of common use, we must rearrange the files on the disk so that the performance is optimized.

Programs called disk defragmenters use these simple principles to rearrange data records on a disk so that each file is contiguous, with no holes or few holes between data records. Some more sophisticated disk defragmenters also try to place related files near each other, usually based on simple static structure rather than a dynamic analysis of the accesses. We are interested in more dynamic defragmentation procedures.

We first consider a 1D model of the disk. We then look at the results from an investigation of the 2D disk model followed by a discussion of caching strategies. Finally we list some of the complications that may need to be addressed in order to make the models more realistic.

T. D. Parsons, who was then at Pennsylvania State University, was approached in 1977 by some local spelunkers who asked his aid in optimizing a search for someone lost in a cave in Pennsylvania. Parsons quickly formulated the problem as a search problem in a graph. Subsequent papers led to two divergent problems. One problem dealt with searching under assumptions of fairly extensive information, while the other problem dealt with searching under assumptions of essentially zero information. These two topics are developed in the next two sections.

The noise introduced by the discrete approximation to backwards diffusion forces the intensity away from uniform values, so that rounding each pixel to black or white can produce a pleasing halftone. We formulate our method by considering the Human Visual System norm and approximating the inverse of the blurring operator. We also investigate several possible mobility functions for use in a nonlinear backward diffusion equation for higher quality results.

The wireless transmission channel alternates between good and bad states over time. While the channel is in a bad state the transmission of a packet fails and the packet needs to be retransmitted. When the state of the channel is good, packets are transmitted successfully and do not require retransmission. The system transmission efficiency or throughput might be defined as the number of data packets that can be transmitted successfully in a given time.
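
This good/bad alternation is often captured by a two-state Markov ("Gilbert") channel model. The sketch below simulates such a channel and measures throughput as the fraction of slots carrying a successful transmission, with transition probabilities chosen arbitrarily for illustration:

```python
import numpy as np

def simulate_throughput(p_gb, p_bg, n_slots, rng):
    """Two-state Markov channel: in a good slot the packet succeeds;
    in a bad slot it fails and must be retransmitted.  p_gb is the
    good-to-bad transition probability per slot, p_bg the bad-to-good
    one.  Returns the fraction of slots with a successful packet."""
    good = True
    successes = 0
    for _ in range(n_slots):
        if good:
            successes += 1
            good = rng.random() >= p_gb   # stay good unless g->b fires
        else:
            good = rng.random() < p_bg    # escape bad state with prob p_bg
    return successes / n_slots
```

The long-run throughput is simply the stationary probability of the good state, p_bg / (p_gb + p_bg), which the simulation approaches for long runs.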

The scheduler we wish to study is designed to consider both the wireless channel conditions and the users' quality of service requirements. For this purpose users are assigned a credit every scheduling frame, which is a function of the wireless channel conditions and the quality of service required, which depends on the traffic class.

* the transition matrix is large, but sparse.

* the systems of linear equations to be solved are generally singular and need some additional normalisation condition, such as is provided by using the fundamental matrix.

We also note a third highly important property regarding applications of numerical linear algebra:

* the transition matrix is asymmetric.

A realistic dimension for the matrix in the Bianchi model described below is 8064×8064, but on average there are only a few nonzero entries per column. Merely storing such a large matrix in dense form would require nearly 0.5GBytes using 64-bit floating point numbers, and computing its LU factorisation takes around 80 seconds on a modern microprocessor. It is thus highly desirable to employ specialised algorithms for sparse matrices. These algorithms are generally divided between those only applicable to symmetric matrices, the most prominent being the conjugate-gradient (CG) algorithm for solving linear equations, and those applicable to general matrices. A similar division is present in the literature on numerical eigenvalue problems.
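
A sketch of the normalisation trick mentioned above: the singular system for the stationary distribution is made solvable by replacing one balance equation with the condition that the probabilities sum to one, after which a sparse solver applies. The two-state chain here is only a toy; at realistic sizes the matrix would be assembled directly in sparse form rather than converted from dense:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

def stationary(P):
    """Solve pi P = pi with sum(pi) = 1 for a transition matrix P.
    (P^T - I) is singular, so the last balance equation is replaced
    by the normalisation condition, yielding a nonsingular system."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                 # normalisation replaces one equation
    b = np.zeros(n)
    b[-1] = 1.0
    return spsolve(csc_matrix(A), b)
```

Note that the matrix here is asymmetric, so symmetric-only methods such as plain conjugate gradients do not apply; sparse LU (as used by spsolve) or nonsymmetric Krylov methods are the appropriate tools.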

It is now possible to formulate the Quadratic Assignment Problem that remains after clustering the original problem to one with 100 up to 1000 macros. There is a large literature on finding the global minimum of the costs, but current computational resources are still too restrictive to find an optimal solution within a reasonable amount of time and memory. However, we believe it is possible to find a solution that leads to an acceptable local minimum of the costs.

The problem posed to the Study Group was to develop a dynamic reassignment algorithm for implementing a new frequency plan so that there is little or no disruption of the network's performance during the transition. This problem was naturally formulated in terms of graph colouring and an effective algorithm was developed based on a straightforward approach of search and random colouring.