<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
StyleSheet for use when a translation requires any CSS style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean, which need larger font sizes.
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox where print preview displays the noscript content */
}
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (e.g. [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

Also see [[AdvancedOptions]]

This tiddler was automatically created to record the details of this server

We all sit in 'office swivel' chairs. An 'intuitive' solution has been developed to make the experience more comfortable (using the occupant's mass rather than springs), but its effectiveness is difficult to quantify. Can a model be developed to determine which human percentile will receive the same effect as they recline and return to neutral rest? Is the current geometric setup a true reflection of the forces in play, or could the geometry be altered to achieve a more efficient result?
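
A minimal sketch of the percentile question, assuming the restoring effect comes from the occupant's weight acting through a horizontal offset from the recline pivot; the masses, offset and recline angle below are illustrative assumptions, not data from the problem.

```python
import math

def restoring_moment(mass_kg, offset_m, recline_rad, g=9.81):
    """Restoring moment (N*m) when the seat reclines by recline_rad,
    assuming the occupant's weight acts at a horizontal offset from the
    pivot that grows with the recline angle (hypothetical geometry)."""
    return mass_kg * g * offset_m * math.sin(recline_rad)

# Compare indicative occupant percentiles (masses are illustrative).
for label, mass in [("5th %ile", 50.0), ("50th %ile", 78.0), ("95th %ile", 110.0)]:
    m = restoring_moment(mass, offset_m=0.05, recline_rad=math.radians(15))
    print(f"{label}: {m:.1f} N*m")
```

Because the moment scales linearly with mass, a mechanism tuned for the 50th percentile over- or under-restores at the extremes; a model would need to check whether the geometry compensates for this.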
The problem AOT presents is that of ADR-option pricing. The brief outline below describes the problem and its setting. In the final presentation of the problem, a brief introduction to options and option pricing will be given, so that the jargon will be clear.

As a stock and derivative trading firm, AOT is active on several exchanges around the world. One of its trading strategies is so-called ADR trading. ADR is an acronym for American Depository Receipt: a security, tradable on a US exchange, that is based on a foreign stock. Consider for example the stock ABN-AMRO, traded on Euronext Amsterdam. A U.S. bank bought a number of these stocks, put them into its safe and printed U.S. notes that can be exchanged for the stocks. These notes are traded on the NYSE and called ABN-AMRO ADRs. As these ADRs are equivalent to the ABN-AMRO stock, their price should be the price of the Euronext listing converted into dollars.
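
The parity relationship in the last sentence can be sketched as follows; the `ratio` parameter is an assumption added for generality, since real ADRs often bundle several shares per receipt.

```python
def adr_fair_price(local_price_eur, eurusd, ratio=1.0):
    """Theoretical ADR price in USD: the local (Euronext) price converted
    at the spot EUR/USD rate, times the ADR ratio (shares per receipt)."""
    return local_price_eur * eurusd * ratio

# Example: a EUR 20.00 stock with EUR/USD at 1.10 and a 1:1 ratio.
print(adr_fair_price(20.0, 1.10))  # about 22 USD
```

Any persistent deviation of the traded ADR price from this value is what the ADR trading strategy looks to exploit.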

!Options on ADRs
As the ADRs are assets traded on an exchange, they can be used to construct derivatives. Options can be written or bought on many ADR counterparts of Dutch Euronext stocks. ADR-option trading consists of trading these U.S. options against the Dutch ones; where possible, the hedging should be done with Dutch stocks (this reduces trading fees).

!Pricing Problem
Before trading these options, we need a price for them. Pricing the U.S. options currently comes down to estimating the correlation between the Dutch stock and the Euro/Dollar exchange rate, where we assume that both processes can be modeled by standard geometric Brownian motion (GBM) with correlated Brownian motions. In the past this did not work well at all. Correlations are not stable but change heavily, and even when they were stable the P/L did not match the expected P/L. Furthermore, it is not at all clear how the correlation translates Dutch skew into U.S. skew.
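
The correlated-GBM approach described above can be sketched as a Monte Carlo over two correlated Brownian drivers. This is a deliberately simplified illustration: it assumes a single USD risk-free drift for both processes and ignores the quanto drift correction, dividends and the skew issue raised above; all parameter values are made up.

```python
import math, random

def price_adr_call(s0, fx0, strike_usd, r, sigma_s, sigma_fx, rho, T,
                   n_paths=20000, seed=1):
    """Monte Carlo price of a USD call on the ADR (= stock price * FX rate),
    with the stock and EUR/USD each simulated as a GBM driven by
    correlated standard normals (correlation rho)."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        sT = s0 * math.exp((r - 0.5 * sigma_s**2) * T + sigma_s * math.sqrt(T) * z1)
        fxT = fx0 * math.exp((r - 0.5 * sigma_fx**2) * T + sigma_fx * math.sqrt(T) * z2)
        payoff_sum += max(sT * fxT - strike_usd, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

print(price_adr_call(20.0, 1.10, 22.0, 0.03, 0.25, 0.10, 0.3, 1.0))
```

Note how the ADR's effective volatility depends on both volatilities and on rho, which is exactly why an unstable correlation estimate feeds straight through into the option price.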




`x/x={(1,if x!=0),(text{undefined},if x=0):}`

`hat(ab) bar(xy) ulA vec v dotx ddot y`

|''Description:''|This plugin can be used to make the [[ASCIIMathML|http://www1.chapman.edu/~jipsen/mathml/asciimath.html]] script (version 2.1) available from TiddlyWiki. That script translates ASCII or LaTeX math notations to MathML and also provides an easy way to produce mathematical SVG graphics.|
|''Author:''|Paulo Soares with Peter Jipsen's collaboration|
|''License:''|[[Creative Commons Attribution-Share Alike 3.0 License|http://creativecommons.org/licenses/by-sa/3.0/]]|
config.formatterHelpers.ASCIIMathMLHelper = function(w) {
 this.lookaheadRegExp.lastIndex = w.matchStart;
 var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
 // Reconstructed: the match guard and display-style branch were missing
 // from this copy of the plugin.
 if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
  var eq = parseMath(lookaheadMatch[1],this.latex);
  if(this.displaystyle) {
   var node = createTiddlyElement(w.output,"div");
   node.setAttribute("style","text-align:center");
   node.appendChild(eq);
  } else {w.output.appendChild(eq);}
  w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
 }
};

config.formatters.push( {
 name: "clatexmath",
 match: "\\$\\$",
 lookaheadRegExp: /\$\$((?:.|\n)*?)\$\$/mg,
 latex: true,
 displaystyle: true,
 handler: config.formatterHelpers.ASCIIMathMLHelper
});

config.formatters.push( {
 name: "latexmath",
 match: "\\$",
 lookaheadRegExp: /\$((?:.|\n)*?)\$/mg,
 latex: true,
 displaystyle: false,
 handler: config.formatterHelpers.ASCIIMathMLHelper
});

config.formatters.push( {
 name: "casciimath",
 match: "``",
 lookaheadRegExp: /``((?:.|\n)*?)``/mg,
 latex: false,
 displaystyle: true,
 handler: config.formatterHelpers.ASCIIMathMLHelper
});

config.formatters.push( {
 name: "asciimath",
 match: "`",
 lookaheadRegExp: /`((?:.|\n)*?)`/mg,
 latex: false,
 displaystyle: false,
 handler: config.formatterHelpers.ASCIIMathMLHelper
});

config.formatters.push( {
 name: "automath",
 match: "amath",
 lookaheadRegExp: /amath[\s]+((?:.|\n)*?)[\s]+endamath/mg,
 handler: function(w){
  this.lookaheadRegExp.lastIndex = w.matchStart;
  var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
  if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
   var arr = lookaheadMatch[1].split("\n");
   var i, txt, frag, first, last, node, pos;
   for(i=0; i<arr.length; i++){
    // AMautomathrec wraps the maths parts of the line in backquotes.
    txt = AMautomathrec(arr[i]);
    pos = 0;
    first = txt.indexOf('`',pos);
    // Reconstructed loop: alternate plain-text and backquoted maths runs,
    // the control flow was missing from this copy of the plugin.
    while(first > -1){
     frag = txt.substring(pos,first);
     node = document.createTextNode(frag);
     w.output.appendChild(node);
     last = txt.indexOf('`',first+1);
     frag = txt.substring(first,last+1);
     w.output.appendChild(parseMath(frag,false));
     pos = last+1;
     first = txt.indexOf('`',pos);
    }
    frag = txt.substr(pos);
    node = document.createTextNode(frag);
    w.output.appendChild(node);
    if(i+1<arr.length) w.output.appendChild(createTiddlyElement(w.output,"br"));
   }
   w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
  }
 }
});

config.formatterHelpers.ASCIISVGHelper = function(w) {
 this.lookaheadRegExp.lastIndex = w.matchStart;
 var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
 if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
  var eq = createTiddlyElement(w.output,this.element);
  var svg = createTiddlyElement(eq,"embed");
  // The attribute wiring for the SVG embed (source, script, size) is
  // missing from this copy of the plugin and is not reconstructed here.
  w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
 }
};

config.formatters.push( {
 name: "asciisvg-d",
 match: "agraph",
 lookaheadRegExp: /agraph((?:.|\n)*?)endagraph/mg,
 element: "div",
 handler: config.formatterHelpers.ASCIISVGHelper
});

config.formatters.push( {
 name: "asciisvg-s",
 match: "igraph",
 lookaheadRegExp: /igraph((?:.|\n)*?)endigraph/mg,
 element: "span",
 handler: config.formatterHelpers.ASCIISVGHelper
});

config.shadowTiddlers.ASCIIMathMLPluginDoc="The documentation is available [[here|http://www.math.ist.utl.pt/~psoares/addons.html#ASCIIMathMLPluginDoc]].";
Hotelzon International is a Helsinki-based technology company that develops hotel reservation technology for international travel agencies, hotel booking companies, airlines, web portals and corporations.

Adaptive Polling is a method aimed at adding value for the end users of the Hotelzon hotel reservation system by providing pro-active, prior-to-booking availability information. The core idea of the polling is to pro-actively process non-user-initiated availability requests to the hotel chains' Central Reservation Systems, retrieve individual properties' availability status for the requested period of time and store the information locally in the Hotelzon system. This information is then used to add value for the end users (compared to competing systems) by indicating the (polled, i.e. latest known) availability status of all hotels in the given destination PRIOR to choosing one specific hotel and making an online availability request to that hotel.

The motive for Hotelzon with regard to further developing and optimising the polling algorithm is three-fold:
# further optimising the logic for determining which hotels are polled and how often, i.e. adaptability
# further optimising the logic for determining the status of individual dates within a period of 2+ days
# any other new ideas, improvements and optimisations that could be implemented, including 'artificial intelligence' for the above points.

In other words, the goal for Hotelzon is to further improve and optimise the polling algorithm: polling more hotels with fewer transactions and better quality, i.e. more reliable results.
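
One way to read the adaptability goal is as a multiplicative back-off on per-hotel polling intervals: poll more often where availability is changing, less often where it is not. This is a hypothetical sketch, not Hotelzon's patented algorithm; the bounds and back-off factor are assumptions.

```python
def next_poll_interval(prev_interval_min, status_changed,
                       min_interval=15, max_interval=240, factor=2.0):
    """Adapt a hotel's polling interval (minutes): halve it when the last
    poll revealed a change in availability, double it when it did not,
    clamped to [min_interval, max_interval]."""
    if status_changed:
        interval = prev_interval_min / factor
    else:
        interval = prev_interval_min * factor
    return max(min_interval, min(max_interval, interval))

print(next_poll_interval(60, True))    # 30.0
print(next_poll_interval(60, False))   # 120.0
print(next_poll_interval(200, False))  # 240 (capped)
```

Under this rule, stable hotels drift toward the maximum interval, spending fewer transactions on them, while volatile hotels are polled near the minimum, which is the "more hotels with fewer transactions" trade-off in miniature.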

The polling engine is a unique feature on a global scale. It was innovated and developed at Hotelzon, and it has also been patented.
Maintenance, Repair & Overhaul (MRO) companies are constantly being pressured by their customers to improve turn-around times, reduce costs and increase on-time maintenance performance.

Each year, TAP M&E shops receive more than 25,000 components from different aircraft systems (galley equipment, pneumatics, fuel, data recording, engines, and so on). About 20% of the activity is driven by planned removals, client schedules, AOGs (aircraft on ground) and storage availability. The remaining 80% is driven by the shop manager's experience and intuition!

The component demand rate can be obtained from the removal rate (MTBUR, Mean Time Between Unscheduled Removals) and storage availability. The shop capacity depends greatly on technician skills, test equipment, parts availability and repair time.

The component repair process is not standardized, and the time required to complete a repair is highly variable. It depends on what the fault is, and most of the repair philosophy runs on a test-fix-test process. Every unit that comes to the shop is different, the type of failure is unpredictable, and the experienced technicians and high-end test equipment are unique resources.

On the production scheduling level, the team leader must assign the individual tasks (clean, repair, overhaul, test), or sets of tasks, to the technicians' work lists. This activity should allocate both the technicians and the specific test equipment. Smart scheduling should take the interdependencies into account, including the sequence of tasks, availability of parts, resources and known delays. Tasks need to be dynamically reprioritized and re-allocated in response to changes in client schedules and resource availability.

Therefore, we face two main issues:
# How should the priority list be defined (which units should be tackled first)?
# How should the prioritized components be distributed among technicians throughout all stations of the repair process?
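
The two questions can be sketched as a toy two-stage pipeline: a priority rule, then a greedy assignment. The scoring rule, field names and technician names are hypothetical, and the sketch deliberately ignores the skill, test-equipment and station constraints mentioned above.

```python
import heapq

def prioritize(units):
    """Order repair units by a simple score: earliest due date first,
    then longest expected repair time (a hypothetical rule, not TAP's)."""
    return sorted(units, key=lambda u: (u["due_day"], -u["est_hours"]))

def assign(units, technicians):
    """Greedy load balancing: give each prioritized unit to the currently
    least-loaded technician."""
    heap = [(0.0, name) for name in technicians]
    heapq.heapify(heap)
    plan = {}
    for u in prioritize(units):
        load, name = heapq.heappop(heap)
        plan[u["id"]] = name
        heapq.heappush(heap, (load + u["est_hours"], name))
    return plan

units = [{"id": "U1", "due_day": 3, "est_hours": 8},
         {"id": "U2", "due_day": 1, "est_hours": 4},
         {"id": "U3", "due_day": 1, "est_hours": 6}]
print(assign(units, ["Ana", "Rui"]))
```

A real solution would replace both stages: the priority score with something driven by client schedules and AOG status, and the greedy assignment with one that respects skills and equipment, re-run whenever priorities change.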
National Air Traffic Services Ltd.

<html><h4><font face="arial,sans-serif,times">1. Background</font></h4>
<p><font face="arial,sans-serif,times">Sequencing at Heathrow is one of the most important
        examples of the combination of uncertainty and planning that arises in
        airport operation.  At Heathrow, arrival and departure sequencing are
        approximately independent, because one runway is used for each.  (However,
        at other airports with different patterns of use, there could well be
        considerable interaction between arrival and departure sequencing.)
        The procedure with Arrivals is that aircraft enter one of 4 stacks,
        and the Arrivals Sequencing Controller considers the lowest 1 or 2
        aircraft in each stack and chooses a landing sequence for those aircraft
        subject to Wake Vortex constraints.  This sequencing is done before control
        is passed to the Control Tower.  Computer-assisted sequencing for arrivals
        is being introduced.</font></p>

<p><font face="arial,sans-serif,times">However, departure sequencing has not yet been tackled.
        No one can give a complete mathematical formulation of the departure
        sequencing problem: it is likely that during development of a
        departure sequencing tool, new features of the problem will emerge.
        Nevertheless, NATS would like to use a simplified model to stimulate
        further thought, and as a basis for further refinement and development.
        In this simplified model, the sequence of events is</font></p>

<ul><font face="arial,sans-serif,times"><li><p>The public timetables list a departure time for each flight.</p></li>
<li><p>Based on those times, Eurocontrol issues a Calculated Take-Off
        Time (CTOT).  For instance if the timetabled departure time
        is 1000, the CTOT is 1020, which means the aircraft should
        take off between 1015 and 1030.  This interval is referred
        to as the CTOT <em>slot</em> for that aircraft.  Some aircraft do not
        have a CTOT slot.</p></li>
<li><p>When the aircraft is ready to leave its departure stand, the pilot
        radios the control tower requesting permission to push back.
        At present, this is the point at which an aircraft enters the
        sequencing system.  These requests are handled by the Ground
        Movement Planner.  When the tug is ready and it is clear for
        the aircraft to enter the taxiway, permission is granted.</p></li>
<li><p>Then (sometimes after a delay) the aircraft is pulled away from the
        stand by the tug, starts its engines and begins taxiing.
        Responsibility for directing the aircraft is then passed from
        the Ground Movement Planner to the Ground Movement Controller.</p></li>
<li><p>The Ground Movement Controller directs the aircraft to taxi to a
        particular holding point (also called a holding area).  The
        taxi times vary considerably (for reasons mentioned later)
        but average around 15 minutes.</p></li>
<li><p>Some holding points are simply a single-file queue at the side of
        the runway.  But at other holding points there are 2 parallel
        sub-queues: call these <em>branched</em> holding points.  When an aircraft
        arrives at an <em>unbranched</em> holding point, it simply joins the queue
        there, and passes to the responsibility of the Take-Off Controller.
        When an aircraft arrives at a <em>branched</em> holding point, if both
        sub-queues are full the aircraft simply queues on the taxiway,
        but if not then the Ground Movement Controller either directs
        the aircraft to wait on the taxiway or to join a particular
        non-full sub-queue.  Once the aircraft is in a sub-queue at
        a branched holding point, it also passes to the responsibility
        of the Take-Off Controller.</p></li>
<li><p>When an aircraft is at the front of its holding queue (or sub-queue),
        it is called by the Take-Off Controller at an appropriate
        time, enters the runway and takes off.</p></li></font></ul>
<p><font face="arial,sans-serif,times">At present the Air Traffic Controllers observe the progress
        of aircraft on the ground directly, and progress is recorded on a
        paper strip for each aircraft.  Each controller ticks or annotates
        the strip to indicate the various actions performed under his control,
        and then passes the strip to the next controller to indicate the
        transfer of responsibility.  This procedure is changing: surface
        movement radar is being introduced, which will be able to identify
        each aircraft by its call sign and give its position and speed at
        1 second intervals.  Also the handovers of responsibility will be
        by electronic flight strips.  There are various hard and soft constraints
        and objective functions that arise in formulating the sequencing problem
        as an optimization.</font></p>
<ul><font face="arial,sans-serif,times"><li><p>Hard Constraints<br>
        These are constraints on separation of take-off times, and are absolute
                        requirements for safety reasons.</p>
        <ul><li><p>Wake Vortex Constraints<br>
        Aircraft are classified into 4 weight classes (Heavy, Medium, 
Small and Light) and the interval between take-off times of successive 
aircraft must be at least a specified limit depending on the classes of 
the 2 aircraft.  (In fact for the Take-Off constraints, the Medium and 
Small classes can be combined.)</p></li><li><p>Route Constraints<br>
        Each aircraft will be following one of 6 standard routes 
(Standard Instrument Departures, SIDs) after take-off (depending on its 
destination), and the interval between take-off times of successive 
aircraft must be at least a specified limit depending on the SIDs of the
 2 aircraft.</p></li><li><p>MDI (Minimum Departure Interval)<br>
        Occasionally, it may be necessary to impose a further minimum 
separation requirement on the aircraft following a particular SID.</p></li></ul>
</li><li><p>Soft Constraints<br>
        It appeared from the description that the CTOT slot times
        are an administrative requirement rather than a safety
        requirement.  In fact, at busy times it may not be possible
        to meet all the CTOT slots even if everything else is running
        perfectly.  If an aircraft misses, or is clearly going to miss,
        its CTOT slot then it is notified of this by the Control Tower,
        the pilot (or his airline) makes a request to Eurocontrol for
        a replacement CTOT slot, and is issued with one.</p></li>
<p><font face="arial,sans-serif,times">The objective functions that can be involved in
        measuring the performance of a sequencing system are</font></p>
<ul><font face="arial,sans-serif,times"><li><p>The 'makespan' of the aircraft presently in the system, i.e.
        the time until the last of those aircraft can take off.</p></li>
<li><p>The average delay -<br>
        The delay of a flight is calculated as follows: a nominal take-off time
        <em>t</em><sub>1</sub> is</p>
        <p align="center">
                <em>t</em><sub>1</sub> = <em>t</em><sub>request-push-back</sub> + <em>t</em><sub>nominal-taxi</sub>.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(1)
        <p>where the nominal taxi time <em>t</em><sub>nominal-taxi</sub> depends on the distance
        of the stand from the end of the runway.  If this nominal take-off time
        <em>t</em><sub>1</sub> is <em>earlier</em> than the start of the aircraft's CTOT
        slot then it is modified to</p>
        <p align="center">
                <em>t</em><sub>2</sub> = max(<em>t</em><sub>1</sub>, min(CTOT slot)).&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(2)
	<p>Then the delay Δ<em>t</em><sub>delay</sub> is</p>
        <p align="center">
                Δ<em>t</em><sub>delay</sub> = <em>t</em><sub>take-off</sub> - <em>t</em><sub>2</sub>.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(3)
	<p>The average delay currently used is an unweighted average over aircraft.</p></li>
<li><p>The number of missed CTOT slots.
</p></li><li><p>The number of 'overtakes' -<br>
        Aircraft A is considered to have been overtaken by aircraft B if
        A arrives at its holding queue <em>before</em> B arrives at its
        queue, but A takes off <em>after</em> B.  There is a strong
        'first come first served' tradition, so the number of overtakes
        must be kept down.  In particular it is considered poor if an
        aircraft is overtaken (in this sense) by too many others.</p></li></font></ul>
<p><font face="arial,sans-serif,times">The aim is to minimize all these quantities
        (makespan, delays, missed slots, overtakes).  The precise
        balance between how these are incorporated in an objective
        function is not clear (though they may be sufficiently
        correlated that it does not matter too much).  The
        'capacity' of the airport is defined as the throughput
        that can be achieved while keeping the average delay
        over the peak hours to no more than 10 minutes.  At
        present, this capacity is about 42 departures an hour.</font></p>
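<p><font face="arial,sans-serif,times">The delay definition in equations (1)-(3) can be sketched directly in code. This is a minimal illustration with invented flight times, not part of the NATS specification; times are minutes since midnight.</font></p>

```python
# Hedged sketch of the delay calculation in equations (1)-(3).
# The flight times below are invented examples.

def delay(t_request_push_back, nominal_taxi_time, t_take_off, ctot_slot_start=None):
    """Delay of one flight per equations (1)-(3).

    t1 (eq. 1): nominal take-off time = push-back request time + nominal taxi time.
    t2 (eq. 2): t1 moved right to the start of the CTOT slot if t1 is earlier.
    delay (eq. 3): actual take-off time minus t2.
    """
    t1 = t_request_push_back + nominal_taxi_time                       # (1)
    t2 = t1 if ctot_slot_start is None else max(t1, ctot_slot_start)   # (2)
    return t_take_off - t2                                             # (3)

# A flight requesting push-back at 600 (10:00) with a 15-minute nominal taxi,
# a CTOT slot starting at 620 (10:20), actually taking off at 628 (10:28):
print(delay(600, 15, 628, ctot_slot_start=620))  # 8 minutes
```

The unweighted average of this quantity over aircraft gives the average-delay objective described above.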

<h4><font face="arial,sans-serif,times">2. Sequencing decisions</font></h4>
<p><font face="arial,sans-serif,times">The decisions made during the process are</font></p>
<ul><font face="arial,sans-serif,times"><li><p>Decisions by the Ground Movement Planner:<br>
        When an aircraft requests permission to push back from its
        stand, the Ground Movement Planner determines how long to
        wait before giving permission.  He has to wait until it is
        clear, and a tug is attached, but he could wait longer if doing
        so would improve the departure sequence.</p></li>
<li><p>Decisions by the Ground Movement Controller:<br>
        The Ground Movement Controller chooses which holding point to
        direct an aircraft to.  Some aircraft may need or request the
        full length of the runway, in which case there is no choice of
        holding point.  There is also no choice if the aircraft
        approaches from a taxiway that leads to only one holding
        point.  At a branched holding point, the Ground Movement
        Controller directs an aircraft whether to wait or to join
        a particular sub-queue.</p></li>
<li><p>Decisions by the Take-Off Controller:<br>
        The Take-Off Controller decides which of the aircraft at
        the front of the queues and sub-queues is to be called
        to take off next.</p></li></font></ul>
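<p><font face="arial,sans-serif,times">The Take-Off Controller's decision is constrained by the hard separations of section 1. As a hedged sketch, the check can be expressed as a table lookup; the separation values and SID names below are illustrative placeholders, not real NATS data, and the Medium/Small combination noted above is applied.</font></p>

```python
# Sketch of the Take-Off Controller's feasibility check: the interval between
# successive take-offs must respect the wake-vortex separation, the SID (route)
# separation, and any Minimum Departure Interval (MDI).
# All numbers are illustrative placeholders, NOT real separation minima.

WAKE_SEP = {  # seconds between take-offs, (leader class, follower class)
    ("Heavy", "Heavy"): 90, ("Heavy", "Medium"): 120, ("Heavy", "Light"): 180,
    ("Medium", "Heavy"): 60, ("Medium", "Medium"): 60, ("Medium", "Light"): 120,
    ("Light", "Heavy"): 60, ("Light", "Medium"): 60, ("Light", "Light"): 60,
}
SID_SEP = {("SID1", "SID1"): 120, "default": 60}  # same route needs a bigger gap

def min_separation(leader, follower):
    """Minimum interval (s) between the leader's and the follower's take-offs."""
    wake = WAKE_SEP[(leader["wake"], follower["wake"])]
    sid = SID_SEP.get((leader["sid"], follower["sid"]), SID_SEP["default"])
    mdi = follower.get("mdi", 0)  # occasional extra Minimum Departure Interval
    return max(wake, sid, mdi)

a = {"wake": "Heavy", "sid": "SID1"}
b = {"wake": "Medium", "sid": "SID1"}
print(min_separation(a, b))  # 120: both the wake and the same-SID limits bind
```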

<h4><font face="arial,sans-serif,times">3. Uncertain elements in the system</font></h4>
<p><font face="arial,sans-serif,times">A real-time sequencing system for this problem has to
        cope with uncertainties at various points, including these:</font></p>
<ul><font face="arial,sans-serif,times"><li><p>When an aircraft is given permission to push back from its stand,
        there can be a delay before it actually does so, either because
        a tug is not present, or because the aircraft is not in fact
        ready.</p></li>
<li><p>Aircraft taxi at different speeds.</p></li>
<li><p>On the route from the stand to the holding point, an
        aircraft may be behind a slower-moving one.</p></li>
<li><p>At Heathrow, the main arrival-departure interaction
        occurs when departures are from the north runway, and aircraft
        departing from terminal 4 have to taxi across the south runway
        between arrivals.  A departure needing to cross the south runway
        therefore has to wait for a gap between arrivals during which it
        can cross.</p></li>
<li><p>The flight crew have to describe the emergency procedures
        etc., and until they have finished doing this the aircraft cannot
        proceed beyond the holding point.</p></li></font></ul>

<h4><font face="arial,sans-serif,times">4. Models of requirements</font></h4>
<p><font face="arial,sans-serif,times">To test possible sequencing algorithms for such a
        system, or for the different elements in such a system, a
        simulation needs to be available, incorporating the following:</font></p>
<ul><font face="arial,sans-serif,times"><li><p>Static properties:<br>
        The minimum-separation tables depending on Wake Vortex Classification, and SID.<br>
        Distances from stands to holding points.<br>
        The capacity of the sub-queues at a branched holding point.
                (This depends on the sizes of aircraft, e.g. a sub-queue
                might hold 2 Heavy aircraft or 3 Medium.)<br>
        The overtaking rules.<br>
        The delays incurred by aircraft having to wait, and manoeuvre
                into a sub-queue etc.</p></li>
<li><p>Information about each aircraft (in say a day's schedule):<br>
        Wake Vortex Classification<br>
        CTOT slot (applies to most aircraft).<br>
        Which stand it goes from.<br>
        Which holding points are possible.<br>
        The time the aircraft requests permission to push back.<br>
        The time the aircraft actually pushes back.</p></li></font></ul>
<p><font face="arial,sans-serif,times">Data for a typical day may be available.</font></p>

<h4><font face="arial,sans-serif,times">5. Possibilities</font></h4>
<ol type="1"><font face="arial,sans-serif,times"><li><p>Optimal use of a branched holding point.<br>
        Considering a simple case, suppose n aircraft are queued at a branched
        holding point in which each sub-queue can hold just 1 aircraft.  Then
        there are 2<sup><em>n</em>-1</sup> possible departure sequences, rather than
        the full <em>n</em>!  Optimal sequencing of this could well be tractable by
        dynamic programming.</p></li>
<li><p>Considering the situation of idea 1., but with a continual
        supply of aircraft, have a look-up table that depends on the <em>actual</em>
        types of the front <em>k</em> aircraft queued, and the <em>proportions</em> of the different
        types in the queue following these <em>k</em>.  From that, the table should say
        which of the front 2 aircraft should be chosen for take-off.  Here a
        'type' means a combination of Wake Vortex Classification and SID, so
        there are 18 possible types, of which perhaps 12 occur
        commonly.</p></li>
<li><p>Consider the idea of a schedule that starts liquid and
        then gradually gels and sets into the final departure sequence as a
        group of aircraft moves from the stands through the taxiways to the
        holding points and to take-off.  One mathematical model that might
        capture this idea could be to allocate to each aircraft a 'feasible
        take-off interval', which is initially its CTOT slot, say
        [<em>a<sub>i</sub></em>, <em>b<sub>i</sub></em>] for aircraft <em>i</em>.
        Then at the start of the day, there will be certain feasible departure
        sequences that meet the hard constraints and the feasible take-off
        intervals.  (Hopefully: if not, people can start calling Eurocontrol
        at the outset.)  However, as the situation develops in real time,
        each aircraft has an 'earliest time it could possibly be ready to
        take off', say <em>a<sub>i</sub></em>(<em>t</em>), based on the current
        time <em>t</em> and its current position (at the stand, or at a point
        on the taxiway, or in a queue).  Once this value
        <em>a<sub>i</sub></em>(<em>t</em>) exceeds <em>a<sub>i</sub></em>, the
        feasible take-off interval contracts on the left to
        [<em>a<sub>i</sub></em>(<em>t</em>), <em>b<sub>i</sub></em>].
        This contraction process will at certain points rule out some
        departure sequences, and this would give a quantitative measure of
        how the system is tightening up.  If at some point, the contracting
        constraints eliminate all feasible sequences then this would give
        the earliest indication that Eurocontrol should be called.  The
        values of <em>a<sub>i</sub></em>(<em>t</em>) can be available to
        an algorithm once the surface movement radar is in use to detect
        aircraft positions on the ground.</p></li></font></ol>
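<p><font face="arial,sans-serif,times">The counting claim in idea 1 can be checked by brute force. The sketch below (an illustration, not a proposed algorithm) enumerates the departure orders from a branched holding point whose two sub-queues each hold one aircraft, fed first-in-first-out from the taxiway; the number of distinct orders is 2<sup><em>n</em>-1</sup>.</font></p>

```python
# Enumerate all departure sequences from a branched holding point with two
# single-aircraft sub-queues.  At each step the controller releases either
# slot occupant; the head of the taxiway queue then fills the freed slot.

def departure_sequences(queue):
    """All departure orders for the FIFO arrival order `queue`."""
    def step(slots, rest, out, acc):
        if not slots:
            acc.append(tuple(out))
            return
        for i, aircraft in enumerate(slots):
            new_slots = slots[:i] + slots[i + 1:]
            new_rest = rest
            if rest:                          # taxiway head fills the free slot
                new_slots = new_slots + [rest[0]]
                new_rest = rest[1:]
            step(new_slots, new_rest, out + [aircraft], acc)
    q = list(queue)
    acc = []
    step(q[:2], q[2:], [], acc)
    return acc

seqs = departure_sequences([1, 2, 3, 4])
print(len(seqs), len(set(seqs)))  # 8 8  ->  2**(4-1) distinct sequences
```

Each of the first <em>n</em>-1 releases offers a binary choice between the two slot occupants, and different choices diverge immediately, which is why the count is exactly 2<sup><em>n</em>-1</sup>.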

<h4><font face="arial,sans-serif,times">6. The way forward</font></h4>
<p><font face="arial,sans-serif,times">Currently it is not clear how much improvement in
        sequencing performance could be gained in the three different
        areas involved: If computer-assisted sequencing were to be
        introduced either for the Ground Movement Planner in deciding
        when to release the aircraft from stands, or for the Ground
        Movement Controller in deciding how to direct the taxiing and
        holding, or for the Take-Off Controller in deciding which aircraft
        is next to be called to take off, which of these could have the
        greatest effect?  Is <em>one</em> of them a bottleneck to improving performance,
        or would any improvement only come through <em>co-ordinated</em> improvement in
        <em>each</em> of them?  How much does the uncertainty present at each stage
        limit the possible improvement in sequencing performance?  What
        kind of sequencing algorithm is appropriate for uncertainty of a
        given kind and at a given level?  These are the kind of questions
        NATS wishes to address.</font></p>

<h4><font face="arial,sans-serif,times">Acknowledgement</font></h4>
<p><font face="arial,sans-serif,times">This problem outline was a distillation of notes taken by
        David Allwright at a meeting arranged by the Smith Institute at
        OCIAM in December 2000.  Contributors to the discussion included
        representatives from Brunel University (Steve Noble), Cardiff
        University (Stuart Allen, Steve Hurley, Roger Whitaker), City University
        (Celia Glass), Oxford University (Frederic Havet, Colin McDiarmid) and
        Smith Institute (Robert Leese).  The discussion was prompted by
        John Greenwood (NATS).</font></p></html>
Algae Production – Converting Drainwater into Oyster Feed

!Problem description

The runoff water from greenhouses contains fertilizers and must be cleaned before it can be returned to the groundwater system. A means of doing this is to introduce algae into the water. The algae eat up the fertilizer and clean the water. Subsequently, the algae may be removed and sold as feed, for example, to oyster farms.

Phytocare plans to grow algae in greenhouse runoff water, both for sale as feed and to clean the water. These are two different optimization criteria: (1) maximum production of algae in a production pond, and (2) maximum depletion of contaminants in an exhaustion pond. We would like to determine the best conditions for each case.

A runoff water treatment pond is a racetrack-shaped tank, 30cm deep, with a paddle wheel at one point to keep the water flowing and mixing (see Figure). Tests confirm that the mixture is homogeneous and the algae density is independent of depth. Photosynthesis depends on the amount of light present, which is a function of the time of day, the season, the weather, and the penetration into the pond (a function of the algae density and the depth). Algae growth can be influenced by varying the supply of nutrients and by controlled harvesting. Algae density is measured 3 times per day and the pond composition is regularly analyzed in a laboratory (analysis takes 1 day).

Phytocare seeks mathematical answers to the following questions:

* How does photosynthesis depend on pond composition, in terms of pH, fertilizer content, temperature, CO2 level, and sunshine?
* How can this dependence be determined most efficiently by means of experiments?
* Given a photosynthesis model, what are the dynamics of phytoplankton production?
* How are these dynamics influenced by weather, by nutrient supply, and by harvesting?
* How can production be controlled optimally, as a function of the weather and available sunlight, by adjusting the nutrient supply?
* What is the optimal harvesting strategy?
* How do the optimization strategies change, depending on whether algae production or water cleaning is the goal?
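The light-limited, self-shaded growth described above can be sketched minimally as follows. This is an illustration only: it assumes Beer-Lambert attenuation of light by the algae themselves and a growth rate proportional to the depth-averaged light, and every parameter value is an invented placeholder.

```python
# Minimal sketch of depth-averaged, light-limited algae growth in a
# well-mixed 30 cm pond.  Parameter values are invented placeholders.
import math

H = 0.30        # pond depth [m] (30 cm, from the problem statement)
K_ATT = 10.0    # light attenuation per unit density per metre (assumed)
MU_MAX = 1.0    # maximum specific growth rate [1/day] (assumed)

def mean_light(c, I0):
    """Depth-averaged light under Beer-Lambert self-shading:
    (I0 / (k c H)) * (1 - exp(-k c H))."""
    a = K_ATT * c * H
    return I0 if a == 0 else I0 * (1.0 - math.exp(-a)) / a

def simulate(c0, days, I0=1.0, dt=0.01):
    """Forward-Euler integration of dc/dt = MU_MAX * mean_light(c, I0) * c."""
    c = c0
    for _ in range(int(days / dt)):
        c += dt * MU_MAX * mean_light(c, I0) * c
    return c

# Self-shading keeps the density well below the unshaded exponential c0*e^(t):
print(simulate(0.1, 5.0))
```

Even this toy model shows the tension between the two criteria: a dense production pond shades itself and grows slowly per unit biomass, while a sparse exhaustion pond depletes nutrients slowly in absolute terms.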
* 1997, Jun 9-13: Rensselaer (USA). [[MPI 13|MPI 13]]
* 1998, Jun 8-12: Rensselaer (USA). [[MPI 14|MPI 14]]
* 1999, Jun 7-11: Delaware (USA). [[MPI 15|MPI 15]]
* 2000, Jun 4-8: Delaware (USA). [[MPI 16|MPI 16]]
* 2001, Jun 4-8: Rensselaer (USA). [[MPI 17|MPI 17]]
* 2002, Jun 3-7: Rensselaer (USA). [[MPI 18|MPI 18]]
* 2003, Jun 2-6: Worcester (USA). [[MPI 19|MPI 19]]
* 2004, Jun 21-25: Delaware (USA). [[MPI 20|MPI 20]]
* 2005, Jun 13-17: Worcester (USA). [[MPI 21|MPI 21]]
* 2006, Jun 12-16: Olin College (USA). [[MPI 22|MPI 22]]
* 2007, Jun 11-15: Delaware (USA). [[MPI 23|MPI 23]]
* 2008, Jun 16-20: Worcester (USA). [[MPI 24|MPI 24]]
* 2009, Jun 15-19: Delaware (USA). [[MPI 25|MPI 25]]
* 2010, Jun 14-18: Rensselaer (USA). [[MPI 26|MPI 26]]
* 2011, Jun 13-17: New Jersey (USA). [[MPI 27|MPI 27]]
RAPRA Technology Ltd.

!!1.1) Introduction

Understanding the transport of hazardous airborne materials within buildings and other enclosed spaces is important for predicting and mitigating the impacts of deliberate terrorist releases of chemical and biological materials. Because such materials may be acutely toxic or infectious, it is important to know how concentrations change with time in order to assess the hazards that different scenarios may pose. It is also relevant to the study of accidental releases of industrial materials and the impact of environmental pollutants on indoor air quality.

A range of numerical modelling approaches, as well as experimental methods, is regularly used to study these problems. Computational fluid dynamics (CFD) can be used for detailed studies of air and contaminant movement within enclosed spaces. However, CFD methods require highly detailed input data and have significant model creation and execution times. This can make them impractical for whole-building studies in some cases.

Multizone models (CONTAM [1], COMIS [2] etc.) provide an alternative approach where the building is divided into a series of well-mixed volumes connected by paths through which air and contaminants can pass. These models have the advantage that they are quicker to execute than CFD models and typically require less input information. The contaminants are normally considered dilute in such approaches. Typical model size is of the order of 10-100 zones, although 1000 or more may be required in some cases.

Multizone models solve the air flow through the network (typically using a non-linear pressure solver) for a series of quasi-steady states. The contaminant dispersion resulting from the air flow solution and the contaminant initial and boundary conditions is then calculated. Whilst these models are well developed and have been validated for a range of studies, they rely on numerical methods which can become time consuming for large studies (e.g. Monte Carlo analysis). In addition, little insight is gained into the system behaviour using a numerical approach.

!!1.2) Analytical solutions to concentration equations

At Dstl we are interested in alternative analytical solutions to the transport of contaminants through multizone systems. The transport of a dilute contaminant between a number of zones of fixed volumes can be considered as an example of a compartmental system [3, 4, 5]. It can be described by a system of linear ordinary differential equations. We have found it useful to adopt the general state-space formulation:
$$\dot x = A x+B u \qquad (1)$$
where `x` is the vector of the contaminant concentrations (mass / volume) in each zone of the system, $A$ is the matrix of interzone and exhaust flows normalised by zone volumes (defined below) and `B u` describes the mapping of external concentrations and internal source terms onto the system.

`A` is defined as follows:
$$A = V^{-1} Q \qquad (2)$$
where $V$ is a diagonal matrix in which $V_{i,i}$ is the volume of zone $i$ in [$m^3$] and $V_{i,j} = 0$ for $i\neq j$, and
$$Q = \left[\begin{array}{cccc}
-Q_{1,1} & Q_{1,2} & \cdots & Q_{1,n} \\
Q_{2,1} & -Q_{2,2} & \cdots & \vdots \\
\vdots & \ddots & \ddots & Q_{n-1,n} \\
Q_{n,1} & \cdots & Q_{n,n-1} & -Q_{n,n}
\end{array}\right] \qquad (3)$$
where `Q_{i,j}` is the flow into zone `i` from zone `j` when `i\neq j`, and `Q_{j,j} = \sum_{i=1,\,i\neq j}^n Q_{i,j} + Q_{0,j}` is the total flow out of zone `j`, with `Q_{0,j}` the flow of air out of the system from zone `j`. Note that `i` and `j` take values from 1 to `n` and the index 0 represents the exterior of the building. All flow rates have units of [`m^3 s^{-1}`].

{{c{Figure 1: Ventilated and connected multizone system}}}
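As a concrete illustration of equations (2)-(3), the sketch below builds `A = V^{-1} Q` for a two-zone system with made-up volumes and flows (supply into zone 1, transfer to zone 2, exhaust from zone 2). The numbers are invented; the point is that the column structure of `Q` encodes the mass balance and all eigenvalues of `A` have non-positive real parts, so unforced concentrations decay.

```python
# Two-zone example of A = V^{-1} Q; all volumes and flows are invented.
import numpy as np

V = np.diag([50.0, 100.0])   # zone volumes [m^3]
q12 = 0.0                     # flow into zone 1 from zone 2 [m^3/s]
q21 = 0.1                     # flow into zone 2 from zone 1
q01 = 0.0                     # exhaust directly from zone 1
q02 = 0.1                     # exhaust from zone 2 (balances the supply)

# Diagonal entries are minus the total outflow from each zone, per (3).
Q = np.array([[-(q21 + q01), q12],
              [q21, -(q12 + q02)]])
A = np.linalg.inv(V) @ Q

# With no source, concentrations can only decay: eigenvalues are non-positive.
print(np.linalg.eigvals(A))
```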

The definitions of `B` and `u` are slightly more complex, since they include both internal sources of contaminants and external contaminants drawn into the building at a particular flow rate. A complete description is given in reference [6]. In reality there will be no control over the external concentration or the internal source terms. However, it may be possible to control the volumetric flow into the building. Since this is incorporated within the term `u`, some control over the input may be achieved, although it is likely that this would also change `A`. One related application is the case where an airborne decontaminant is introduced into the building to remediate contaminated building surfaces. In that case one problem of interest would be how to control `u` to achieve a certain `x`.

In standard multizone models this system of equations is integrated numerically to give concentrations based on the initial conditions. For a single scenario such a calculation is typically fast to carry out. However, for some applications such as the optimisation of detector placement or the interpretation of sensor data, many thousands of individual simulations may need to be carried out. For large numbers of these repeated cases `A` will be constant, with only variation in the initial concentration or source terms `u`. In some cases there may be large differences in zone volumes, or secondary effects which introduce a wide range of timescales and require small timesteps over long periods to solve the stiff equations.

As an alternative we have been exploring the importance of the eigenvalues and eigenvectors of the matrix `A` and the explicit solution where `A` is diagonalisable. The solution to (1) can be written as follows:
$$x(t) = e^{At} x(0) + \int_0^t e^{A(t-\mu)} B(\mu)u(\mu)\, d\mu \qquad (4)$$
where $t$ is the current time and $\mu$ is a variable of integration. In the case where the system matrix `A` is diagonalisable, the exponential term (the state transition matrix) can be expressed as [7]:
$$e^{At} = S e^{\Lambda t} S^{-1} \qquad (5)$$
where `S` is the matrix of eigenvectors and `\Lambda` is the matrix with the eigenvalues arranged on the diagonal.

This explicit solution for the contaminant concentrations as well as the exposure (the integral of the concentration `x` for any zone) in the form of a sum of exponentials is particularly useful. Once we know the eigenvalues and eigenvectors it allows us to calculate the concentration at any future time (for a constant `A`) without iteration. This also provides the exposure (the concentration integrated with respect to time) directly. When screening a large number of scenarios this direct calculation can save time. For example, we may wish to simulate a wide range of possible source terms to ascertain their impact or to evaluate the best locations for detection systems. For some applications large numbers of scenarios may be solved in advance to form a library against which to compare information from detectors to identify the most probable source details in near-real time.
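The direct evaluation described above can be sketched as follows. The matrix is an invented diagonalisable example (not building data); the eigendecomposition result is cross-checked against a truncated Taylor series for the matrix exponential, which is adequate here because the entries of `At` are small.

```python
# Sketch of equation (5): once S and the eigenvalues are known, e^{At} x(0)
# is a sum of exponentials, evaluated at any t without time-stepping.
import numpy as np

A = np.array([[-0.002, 0.0005],
              [0.001, -0.001]])   # invented diagonalisable system matrix [1/s]
x0 = np.array([1.0, 0.0])         # initial concentrations
t = 600.0                          # evaluation time [s]

lam, S = np.linalg.eig(A)
x_t = S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S) @ x0

# Cross-check against a truncated Taylor series for e^{At}.
term = np.eye(2)
expAt = np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k
    expAt = expAt + term
print(np.allclose(x_t, expAt @ x0))  # True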

The eigenvectors and eigenvalues are also interesting in their own right, since they provide insight into the behaviour of the system, such as the late-phase decay rate and concentration ratio. The eigenvalue of smallest magnitude controls the final decay rate of the system, and the dependence of this eigenvalue on system properties is of interest. Complex eigenvalues appear to arise from recirculating flow paths and result in damped oscillations in the concentrations.
We recognise that for some systems the matrix `A` may not be diagonalisable. For example, a collection of zones with identical volumes in series and with the same flow through each of them leads to a non-diagonalisable case. This is an important case, since a series of such zones can be used in the CONTAM multizone software to represent a length of duct work to improve the time resolution of the contaminant transport. We are interested in alternative analytical forms of the solution for these cases which could add insight into the concentration dynamics.

A recent summary of some of our work in this area [6] may be useful to show the approach we are taking to the problem in more detail.

!!1.3) Specific questions

Whilst we have found this approach useful we have identified a number of areas where we would benefit from some mathematical expertise. This section lays out the specific questions we would like to explore during the study group.

# We recognise that `A` may not be diagonalisable in some cases, such as a series of zones of identical volumes with flow passing through them. Are there other cases where `A` is not diagonalisable? Is it possible to characterise the types of systems that are diagonalisable and non-diagonalisable?

# For the general case where `A` is not diagonalisable it seems possible to construct an analytical form of the state transition matrix $e^{At}$ using the Jordan canonical form. Is it possible to derive a useful analytical form for the concentration solution for any matrix `A`? Many authors warn against the use of the Jordan form for practical calculations - does this rule out the approach? For the simpler cases, is there a physical interpretation?

# For the diagonalisable case is it possible to bound the values of the smallest magnitude eigenvalue based on the properties of the system matrix `A` or related system properties such as the total exhaust flow and volume? We have seen examples where this value is both larger and smaller than the system flushing rate (the air change rate).

# Is it possible to solve the inverse problem? In other words, if we have measurements of concentration (`x_i`) in one or more zones, can we establish any information about the source terms or external concentrations `u`?

!!1.4) Wider topics of interest

There are a range of wider questions of interest that we would be happy to receive
comments and input on.

# What are the practical limits to using solutions of the form (5) for large systems? For systems with around 300 zones and large condition numbers, we have encountered numerical problems in calculating the concentration at short times, compared with iterative schemes. Are there alternative approaches that can avoid these problems?

# We have explored some results for a nested two-zone case where it can be shown that the exposure in each zone is the same as the external exposure. Can this be extended to more zones? Are there other systems which lead to special case solutions?

# Is it possible to write down solutions for cases when the input function `u` is not constant (e.g. periodic, linear ramp etc.)?
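On the last question, two simple input shapes do admit closed forms from (4). The following is a hedged sketch, assuming `B` is constant and `A` is invertible (which holds when every eigenvalue lies strictly in the left half-plane, i.e. the building is ventilated):

```latex
% Constant input u(t) = u_0:
x(t) = e^{At}x(0) + A^{-1}\left(e^{At} - I\right)B u_0 .
% Complex-sinusoidal input u(t) = u_0 e^{i\omega t}
% (take real parts for a cosine forcing); the particular solution is
x_p(t) = \left(i\omega I - A\right)^{-1} B u_0 \, e^{i\omega t},
% to which the homogeneous transient e^{At}\left(x(0) - x_p(0)\right) is added.
```

A linear ramp follows similarly by differentiating the constant-input case; genuinely arbitrary `u` still requires the convolution integral in (4).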

©Crown copyright 2011. Published with the permission of the Defence Science and
Technology Laboratory on behalf of the Controller of HMSO.

[1] WALTON, G. N., DOLS, W. S., CONTAMW 2.4c User Guide and Program Documentation, Technical report, National Institute of Standards and Technology, 2008.
[2] HAAS, A., WEBER, A., DORER, V., KEILHOLZ,W., PELLETRET, R., COMIS v3.1 simulation environment for multizone air flow and pollutant transport modelling, Energy and Buildings, 2002, 34(9), 873–882.
[3] JACQUEZ, J. A., Compartmental analysis in biology and medicine, Amsterdam: Elsevier, 1972.
[4] GODFREY, K., Compartmental models and their application, London: Academic Press, 1983.
[5] JACQUEZ, J. A., SIMON, C. P., Qualitative theory of compartmental systems, SIAM Review, 1993, 35(1), 43–79.
[6] PARKER, S.T., BOWMAN, V., State-space methods for calculating concentration dynamics in multizone buildings, Building and Environment, 2011, In Press, Corrected Proof, ISSN 0360-1323. URL http://www.sciencedirect.com/science/article/B6V23-521WB07-1/2/b4ed0851e8d1921707fdc8684e393d8c
[7] STRANG, G., Linear algebra and its applications, San Diego: Harcourt Brace Jovanovich, 1988.
* 1998, Feb 2-6: Queensland (Australia). [[MISG 1998|MISG 1998]]
* 1999: Queensland (Australia). [[MISG 1999|MISG 1999]]
* 2000, Jan 31 - Feb 4: South Australia (Australia). [[MISG 2000|MISG 2000]]
* 2001, Jan 29 - Feb 2: South Australia (Australia). [[MISG 2001|MISG 2001]]
* 2002, Feb 11-15: South Australia (Australia). [[MISG 2002|MISG 2002]]
* 2003, Feb 3-7: South Australia (Australia). [[MISG 2003|MISG 2003]]
* 2004, Jan 26-30: Auckland (New Zealand). [[MISG 2004|MISG 2004]]
* 2005, Jan 24-28: Massey (New Zealand). [[MISG 2005|MISG 2005]]
* 2006, Jan 30 - Feb 3: Massey (New Zealand). [[MISG 2006|MISG 2006]]
* 2007, Feb 5-9: Wollongong (Australia). [[MISG 2007|MISG 2007]]
* 2008, Jan 28 - Feb 1: Wollongong (Australia). [[MISG 2008|MISG 2008]]
* 2009, Jan 27-31: Wollongong (Australia). [[MISG 2009|MISG 2009]]
* 2010, Feb 7-12: Wollongong (Australia). [[MISG 2010|MISG 2010]]
* 2011, Feb 6-11: Melbourne (Australia). [[MISG 2011|MISG 2011]]
Example: Solving the quadratic equation.
Suppose a x^2+b x+c=0 and a!=0. We first divide by a to get x^2+b/a x+c/a=0.

Then we complete the square and obtain x^2+b/a x+(b/(2a))^2-(b/(2a))^2+c/a=0. The first three terms factor to give (x+b/(2a))^2=(b^2)/(4a^2)-c/a. Now we take square roots on both sides and get x+b/(2a)=+-sqrt((b^2)/(4a^2)-c/a).

Finally we move the b/(2a) to the right and simplify to get the two solutions: x_(1,2)=(-b+-sqrt(b^2-4a c))/(2a)
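The derivation above can be checked numerically. This short sketch evaluates the closed form on a made-up example with real roots (it assumes a non-negative discriminant):

```python
# Numerical check of x_(1,2) = (-b +- sqrt(b^2 - 4ac)) / (2a).
import math

def solve_quadratic(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, assuming a != 0 and real roots."""
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

x1, x2 = solve_quadratic(1, -5, 6)   # x^2 - 5x + 6 = (x - 2)(x - 3)
print(x1, x2)  # 3.0 2.0
```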
Critical Software has defined a Balanced Scorecard to help conduct, monitor and control its strategic management. A global objective for the company to achieve is defined, and depends on a given number of secondary objectives, which may depend on yet another level of underlying objectives to be fulfilled.

For each one of these objectives, there exists a threshold defining whether the goal has been achieved. Additionally, correlations between the several underlying objectives have already been identified. However, the dependence between each objective has not yet been precisely determined or estimated. The current estimates rely on heuristic knowledge and daily working experience with the treatment of the current models.

The problem under consideration is to define a model which enables Critical Software to improve its understanding of:
# the dependence between each of the current underlying objectives, as well as the identification of previously unknown correlations between such objectives;
# the existence of hidden underlying objectives which may be relevant for the global objective but are not taken into consideration in the present model;
# the estimation of each underlying objective's influence on every other objective.
Philips Natlab is looking for new ways to compress audio-signals.

A new method for the digital representation of high-quality audio signals has been introduced as an alternative to the widely used 16-bit recording format for CD signals. This new method [3] produces 1-bit samples at a rate that is typically 64 times higher than for CD, where samples are generated at a rate of 44.1 kHz.

The new method results in a raw audio data volume which is 4 times as large as for ordinary CD signals. New storage media provide a huge storage capacity; nevertheless, it is beneficial to reduce the required storage capacity. Since the new format is intended for high quality audio signals, popular compression techniques that change the signal (lossy coding) are unacceptable. This opens up a whole new research area of lossless coding of 1-bit audio signals.

Currently, two main methods have been developed for lossless coding of such 1-bit audio streams [1, 2]. The first, low-complexity, scheme uses an adaptive prediction table with run-length coding of the residual signal. The second, more elaborate, scheme uses linear prediction with arithmetic coding of the residual signal. In combination with buffering techniques, the methods realise typical average coding gains of 1.3 and 2.1, respectively.
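To make the notion of coding gain concrete, here is a toy sketch (JavaScript; our illustration only, not the prediction-based schemes of [1, 2]): it run-length encodes a 1-bit stream into a crude 8-bit-per-run format and reports the gain as input bits divided by output bits.

```javascript
// Toy illustration of "coding gain" for a 1-bit stream: encode runs of
// identical bits as fixed 8-bit records and compare sizes. This is NOT the
// Philips scheme; it only illustrates gain = input bits / output bits.
function runLengthGain(bits) {
  let runs = 0;
  for (let i = 0; i < bits.length; i++) {
    if (i === 0 || bits[i] !== bits[i - 1]) runs++;
  }
  const outBits = runs * 8; // each run: 1-bit value + 7-bit length (toy format)
  return bits.length / outBits;
}

// A highly repetitive stream (runs of 32) compresses well:
const smooth = Array(1024).fill(0).map((_, i) => (i % 64 < 32 ? 0 : 1));
console.log(runLengthGain(smooth)); // 4
```

A rapidly alternating stream expands instead (gain below 1), which is why the real schemes predict the next bit adaptively rather than rely on long runs.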

Can we make compression methods that do better?

To evaluate new proposals, a few short excerpts of 1-bit audio signals will be made available, together with the coding gains achieved with the methods mentioned above.

[1] F. Bruekers, W. Oomen, R. van der Vleuten, and L. van de Kerkhof. Lossless coding of 1-bit audio signals. AES 8th Regional Convention, Tokyo, Japan, 1997.

[2] F. Bruekers, W. Oomen, R. van der Vleuten, and L. van de Kerkhof. Improved lossless coding of 1-bit audio signals. AES 103rd Convention, New York, 1997.

[3] J. C. Candy and G. C. Temes, editors. Oversampling Delta-Sigma Data Converters: Theory, Design, and Simulation. IEEE, 1992. 
|Created by|SaqImtiaz|
|Version|0.5 beta|
A replacement for the core timeline macro that offers more features:
*list only tiddlers with a specific tag
*exclude tiddlers with a particular tag
*limit entries to any number of days, for example one week
*specify a start date for the timeline; only tiddlers after that date will be listed.

Copy the contents of this tiddler to your TW, tag with systemConfig, save and reload your TW.

{{{<<timeline better:true>>}}}
''The param better:true enables the advanced features; without it you will get the old timeline behaviour.''

additional params:
(use only the ones you want)
{{{<<timeline better:true  onlyTag:Tag1 excludeTag:Tag2 sortBy:modified/created firstDay:YYYYMMDD maxDays:7 maxEntries:30>>}}}

''explanation of syntax:''
onlyTag: only tiddlers with this tag will be listed. Default is to list all tiddlers.
excludeTag: tiddlers with this tag will not be listed.
sortBy: sort tiddlers by date modified or date created. Possible values are modified or created.
firstDay: useful for starting timeline from a specific date. Example: 20060701 for 1st of July, 2006
maxDays: limits timeline to include only tiddlers from the specified number of days. If you use a value of 7 for example, only tiddlers from the last 7 days will be listed.
maxEntries: limit the total number of entries in the timeline.

*28-07-06: ver 0.5 beta, first release

// Return the tiddlers as a sorted array
TiddlyWiki.prototype.getTiddlers = function(field,excludeTag,includeTag)
{
	var results = [];
	this.forEachTiddler(function(title,tiddler)
	{
		if(excludeTag == undefined || tiddler.tags.find(excludeTag) == null)
			if(includeTag == undefined || tiddler.tags.find(includeTag) != null)
				results.push(tiddler);
	});
	if(field)
		results.sort(function(a,b) {if(a[field] == b[field]) return(0); else return (a[field] < b[field]) ? -1 : +1;});
	return results;
}

//this function by Udo
function getParam(params, name, defaultValue)
{
	if(!params)
		return defaultValue;
	var p = params[0][name];
	return p ? p[0] : defaultValue;
}

window.old_timeline_handler = config.macros.timeline.handler;
config.macros.timeline.handler = function(place,macroName,params,wikifier,paramString,tiddler)
{
	var args = paramString.parseParams("list",null,true);
	var betterMode = getParam(args, "better", "false");
	if(betterMode == 'true')
	{
		var sortBy = getParam(args,"sortBy","modified");
		var excludeTag = getParam(args,"excludeTag",undefined);
		var includeTag = getParam(args,"onlyTag",undefined);
		var tiddlers = store.getTiddlers(sortBy,excludeTag,includeTag);
		var firstDayParam = getParam(args,"firstDay",undefined);
		var firstDay = (firstDayParam != undefined) ? firstDayParam : "00010101";
		var lastDay = "";
		var field = sortBy;
		var maxDaysParam = getParam(args,"maxDays",undefined);
		var maxDays = (maxDaysParam != undefined) ? maxDaysParam*24*60*60*1000 : (new Date()).getTime();
		var maxEntries = getParam(args,"maxEntries",undefined);
		var last = (maxEntries != undefined) ? tiddlers.length-Math.min(tiddlers.length,parseInt(maxEntries)) : 0;
		for(var t=tiddlers.length-1; t>=last; t--)
		{
			var tiddler = tiddlers[t];
			var theDay = tiddler[field].convertToLocalYYYYMMDDHHMM().substr(0,8);
			if((theDay >= firstDay) && (tiddler[field].getTime() > (new Date()).getTime() - maxDays))
			{
				// start a new date group whenever the day changes
				if(theDay != lastDay)
				{
					var theDateList = document.createElement("ul");
					place.appendChild(theDateList);
					lastDay = theDay;
					var theDateHeading = createTiddlyElement(theDateList,"li",null,"listTitle",null);
					theDateHeading.appendChild(document.createTextNode(tiddler[field].formatString(this.dateFormat)));
				}
				var theDateListItem = createTiddlyElement(theDateList,"li",null,"listLink",null);
				createTiddlyLink(theDateListItem,tiddler.title,true);
			}
		}
	}
	else
	{
		window.old_timeline_handler.apply(this,arguments);
	}
}

A legal obligation for the pharmaceutical industry is to monitor the quality of its products during their life cycle. The development of new pharmaceutical products (generics) leads to a high number of batches to be tested in order to comply with an ICH Stability Program, and the timelines are critical. The problem is how to manage and schedule the tests of several projects that run at the same time, so as to avoid delays in the answers.
* 1997, Aug 25-29: Vancouver (Canada). [[IPSW 1|IPSW 1]]
* 1998, Jun 1-5: Calgary (Canada). [[IPSW 2|IPSW 2]]
* 1999, May 31 - Jun 4: Victoria (Canada). [[IPSW 3|IPSW 3]]
* 2000, May 29 - Jun 3: Edmonton (Canada). [[IPSW 4|IPSW 4]]
* 2001, May 18-22: Seattle (USA). [[IPSW 5|IPSW 5]]
* 2002, May 27-31: Vancouver (Canada). [[IPSW 6|IPSW 6]]
* 2003, May 25-29: Calgary (Canada). [[IPSW 7|IPSW 7]]
* 2004, May 17-21: Vancouver (Canada). [[IPSW 8|IPSW 8]]
* 2005, May 15-19: Calgary (Canada). [[IPSW 9|IPSW 9]]
* 2006, Jun 26-30: Simon Fraser (Canada). [[IPSW 10|IPSW 10]]
* 2007, Jun 11-15: Alberta (Canada). [[IPSW 11|IPSW 11]]
* 2008, Jun 16-20: Regina (Canada). [[IPSW 12|IPSW 12]]
* 2009, May 20-24: Calgary (Canada). [[IPSW 13|IPSW 13]]
To have a successful shopping experience in a hypermarket, customers need to be able to efficiently find the articles they are looking for and to quickly check out of the store. In order to serve their customers (check-out and payment) in a cost-effective way, retailers need to configure and size their check-out solutions by defining: the number of check-out posts; the type of each check-out (normal or self-service); and how many check-outs of each type. The main goal is to analyse the ideal check-out configuration for the desired service level.
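One classical starting point for such sizing questions is the M/M/c queue. A small sketch (JavaScript; the Erlang C formula with illustrative numbers, not the retailer's data) estimates the probability that an arriving customer has to wait, for a given number of check-out posts:

```javascript
// Sketch: sizing check-outs with the Erlang C formula for an M/M/c queue.
// a = offered load = arrival rate / service rate (in Erlangs); requires a < c.
// Illustrative only; real check-out traffic is burstier than Poisson.
function erlangC(c, a) {
  let term = 1; // running a^k / k!, starting at k = 0
  let sum = term;
  for (let k = 1; k < c; k++) {
    term *= a / k;
    sum += term;
  }
  const top = term * (a / c) * (c / (c - a)); // (a^c / c!) * c / (c - a)
  return top / (sum + top); // probability that an arriving customer waits
}

// e.g. 60 customers/hour, 3-minute mean service time => a = 3 Erlangs
console.log(erlangC(4, 3).toFixed(3)); // "0.509": about half the customers wait
```

Raising c from 4 to 5 drops the waiting probability to roughly a quarter, which is exactly the kind of cost-versus-service-level trade-off the configuration study has to quantify.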
Modeling the Production Process of Nuggets

!Stork Food Systems
Stork Food Systems is the supplier of processing systems for poultry and fresh meat. As a technology market leader, with more than 45 years of experience, process knowledge and proven track record worldwide, we support and equip food processors to create maximum process value now and in the future.

!Problem description
In meat processing equipment, the flow of meat mass is an important aspect. Meat mass will flow through tubes, pass bends and orifices, diverge or converge in manifolds, enter moulds, etc. Optimization of equipment to attain stable, high-end product quality requires control of this meat mass flow. We would like to be able to better predict how meat properties affect flow in our forming (moulding) machines, where meat mass is pressed into moulds during mould opening and flow is a start-stop phenomenon.



Meat masses have the following physical properties:
# Visco-elastic
# Compressible
# Strongly inhomogeneous
# Strongly temperature dependent

These properties together cause a very complex reaction to deformations (flow). Especially in non-continuous flow, where the value of different parameters and even the equipment itself never stabilizes, this leads to high complexity. Complexity hampers prediction of flow and final product quality and, as a result, optimization of equipment.

Nowadays, many flow problems are solved by Computational Fluid Dynamics (CFD). However, meat mass properties are very complex, and CFD as a modeling tool is too complex and time-consuming for such a purpose. There may well be other ways to arrive at a model that sufficiently predicts flow; there might be analogies with other fields of expertise, of which we are not aware at this moment, that point to possible solutions.

The goal of this assignment is to develop a mathematical model that predicts non-continuous flow of visco-elastic, compressible meat mass in simple geometries. Pressure fluctuation, deformation rates, mould filling rates and final product weight should be important parameters in a model.
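As a minimal illustration of the start-stop behaviour (our sketch, not Stork's model), a single Maxwell viscoelastic element, d(sigma)/dt = E * d(eps)/dt - sigma/tau, can be integrated through one start-stop deformation cycle; the parameter values below are invented:

```javascript
// Single Maxwell element integrated with forward Euler through a start-stop
// deformation: stress builds up while the mass flows at a constant strain
// rate, then relaxes after the flow stops. All parameters are illustrative.
function maxwellStress(E, tau, rate, tOn, tEnd, dt) {
  let sigma = 0;
  const history = [];
  for (let t = 0; t < tEnd; t += dt) {
    const epsDot = t < tOn ? rate : 0; // deformation stops at t = tOn
    sigma += dt * (E * epsDot - sigma / tau);
    history.push(sigma);
  }
  return history;
}

// flow for 1 s, then observe relaxation for 2 s
const h = maxwellStress(1e4, 0.5, 0.1, 1.0, 3.0, 1e-3);
console.log(Math.max(...h), h[h.length - 1]); // peak during flow, near-zero after stop
```

Even this one-element toy reproduces the qualitative point in the text: in non-continuous flow the stress never reaches a steady value before the next stop, so quasi-steady (CFD-style) assumptions are strained.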
# [[The dynamics of liquid slugs forced by a syringe pump|Claremont 2009: The dynamics of liquid slugs forced by a syringe pump]] Siemens Healthcare
# [[Identifying biomarkers for exposure to environmental contaminants|Claremont 2009: Identifying biomarkers for exposure to environmental contaminants]] EcoArray, Inc.
# [[Wicking in Microchannels on Biochips|Claremont 2009: Wicking in Microchannels on Biochips]] Akonni Biosystems
# [[A two-base encoded DNA sequence alignment problem in computational biology|Claremont 2009: A two-base encoded DNA sequence alignment problem in computational biology]] National Institute of Genomic Medicine, México
# [[Optimization Techniques for the Power Beaming Analysis of Microwave Transmissions from a Space-Based Solar Power Satellite|Claremont 2009: Optimization Techniques for the Power Beaming Analysis of Microwave Transmissions from a Space-Based Solar Power Satellite]] The Boeing Company
# [[Compact Modeling for a Double Gate MOSFET|Claremont 2009: Compact Modeling for a Double Gate MOSFET]] Information Science Institute
# [[Procedure for improving wildfire simulations using observations|Claremont 2009: Procedure for improving wildfire simulations using observations]] USDA Forestry Service
Presented by Claudia Rangel, National Institute of Genomic Medicine, México

Computational methods in the field of biology have become a key factor since the advent of the human genome project. Since then many other genomes have been sequenced, generating a wide variety of sequence analysis problems. The sequencing of the human genome and the HapMap project have impacted the study of human disease in significant ways and enabled many genome-wide association studies that aim to elucidate the genetic component of complex diseases. The recent introduction of instruments capable of producing millions of DNA sequence reads in a single run is rapidly changing the landscape of genetics, providing the ability to answer questions with heretofore unimaginable speed. SOLiD’s two-base encoding scheme means that data is first collected in “color space,” in which the color provides information about two adjacent bases that must then be decoded into sequence data.
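The two-base encoding can be sketched as follows (JavaScript; the color table is the commonly published SOLiD scheme, reproduced here for illustration): each overlapping pair of adjacent bases maps to one of four colors.

```javascript
// Two-base ("color space") encoding: every adjacent base pair maps to one of
// four colors. Identical bases always give color 0; the remaining pairs are
// grouped as in the commonly published SOLiD table (illustrative).
const COLOR = {
  AA: 0, CC: 0, GG: 0, TT: 0,
  AC: 1, CA: 1, GT: 1, TG: 1,
  AG: 2, GA: 2, CT: 2, TC: 2,
  AT: 3, TA: 3, CG: 3, GC: 3,
};

function toColorSpace(seq) {
  const colors = [];
  for (let i = 0; i + 1 < seq.length; i++) {
    colors.push(COLOR[seq.slice(i, i + 2)]);
  }
  return colors;
}

console.log(toColorSpace("ATGGCA")); // [3, 1, 0, 3, 1]
```

Decoding back to bases requires the known first base of each read, since a color only constrains a transition; consequently a single mis-called color shifts every downstream decoded base, which is why error detection and correction are central to the problem below.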

DNA sequences are strings over a four-letter alphabet of nucleotides, {A, C, G, T}. The length of a sequence is variable, and sometimes we require the alignment of lengthy, highly variable or extremely numerous sequences. Constructing algorithms that produce high-quality sequence alignments using only four letters is a real challenge. Computational approaches to sequence alignment are classified as global or local. Global alignment means that the alignment spans the entire length of all query sequences. Local alignments identify regions of similarity within sequences that are divergent overall, which makes them a better choice, but a more complex one in terms of algorithms.

A variety of computational algorithms have been applied to the sequence alignment problem. For global alignment, the Needleman-Wunsch algorithm has been widely used. For local alignment, the most famous is the Smith-Waterman algorithm, based on dynamic programming. Pair-wise sequence alignment methods are used to find the best-matching piecewise (local) or global alignments of two query sequences; they are efficient to calculate and are often used for methods that do not require extreme precision. Three common methods of producing pair-wise alignments are dynamic programming, dot-matrix methods, and word methods. Multiple sequence alignment is an extension of pair-wise alignment that incorporates more than two sequences at a time, but can also align pairs of sequences.
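As a concrete reference point for the dynamic programming mentioned above, here is a minimal Smith-Waterman scoring sketch (JavaScript; the match/mismatch/gap scores are illustrative defaults, not tuned for color-space reads, and only the score is computed, not the traceback):

```javascript
// Smith-Waterman local alignment score with a linear gap penalty.
// H[i][j] = best score of a local alignment ending at s[i-1], t[j-1];
// the 0 in the max lets an alignment restart anywhere (locality).
function smithWatermanScore(s, t, match = 2, mismatch = -1, gap = -2) {
  const m = s.length, n = t.length;
  let best = 0;
  let prev = new Array(n + 1).fill(0); // previous DP row
  for (let i = 1; i <= m; i++) {
    const cur = new Array(n + 1).fill(0);
    for (let j = 1; j <= n; j++) {
      const sub = prev[j - 1] + (s[i - 1] === t[j - 1] ? match : mismatch);
      cur[j] = Math.max(0, sub, prev[j] + gap, cur[j - 1] + gap);
      if (cur[j] > best) best = cur[j];
    }
    prev = cur;
  }
  return best;
}

console.log(smithWatermanScore("ACGTT", "ACGAT")); // 7
```

The same recurrence runs equally well over color strings as over base strings; the open question in this problem is how to score so that a color mismatch is treated as a possible sequencing error rather than a true variant.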
Challenges: Not all sequences are of the same length. Sequences can have substitutions, insertions and deletions and therefore algorithms should include the possibility of gaps. There are some biological assumptions about the start and the end of a sequence that are useful for algorithm development. However, there could be gaps at these positions as well. Next generation sequencing technologies have implemented a color based approach leading to a two-base encoded type of data.

Search for a new algorithm that facilitates the use of two-base encoded data for large-scale re-sequencing projects. This algorithm should be able to perform local sequence alignment as well as error detection and correction in a reliable and systematic manner, enabling the direct comparison of encoded DNA sequence reads to a candidate reference DNA sequence.

This sequence alignment problem, on the surface, resembles a string matching problem. The Needleman-Wunsch algorithm uses dynamic programming. There are similar problems in Negative Selection (an algorithm from Artificial Immune Systems), and they too are solved using dynamic programming.
There is a similar problem involving the assembly of DNA restriction fragments. We used a genetic algorithm. When we applied GAs on a test problem where the correct sequence is known, the GA gave 4 suboptimal solutions. In DNA studies sub-optimal solutions are no good; we need the correct sequence. One of the four “sub-optimal” sequences is the correct solution, but with the fitness function we used, we could not narrow it down to one optimal solution. It is possible, though, that a GA may work if we tinker with a few fitness function variations. (Rao Vemuri, UC Davis)

!!Final Report:
Title: A two-base encoded DNA sequence alignment problem in computational biology ([[PDF|http://ccms.claremont.edu/files/Math-in-Industry-2009-Workshop-Proble4.pdf]])
Presented by Henok Abebe, Information Science Institute

A compact, physical, potential model for undoped (or slightly doped) short-channel Double Gate (DG) MOSFETs is required. Most transistors go by the name of MOSFET (metal-oxide-silicon field-effect transistor). The current design has various features that are proving undesirable as technology mandates the endless reduction in transistor size. There is extensive research and development underway with new designs. One which looks favourable is the double gate.

Current methods of determining device characteristics for a single transistor include numerical solutions of the partial differential equations governing electron transport. (Quantum effects are now important, but this aspect is not intended for this workshop.) While numerical treatments are satisfactory in providing accurate results, they do not provide a framework for analyzing device parameter optimization.
This becomes increasingly important as the scale of the devices drops to the nanoscale regime, and as the number of parameters increases. Additionally, the time consumption of numerical solutions often precludes their use in SPICE, the simulation software used to obtain chip performance, when the chip contains many transistors. (This number can now approach 1 billion.)

Almost all the analytical solutions in the literature treat the long-channel case (a thin device) for which the PDEs reduce to ODEs, and there is a scarcity of solutions in the short-channel (PDE based) case. An analytical, physically based PDE model (or at least a model that is extremely fast numerically) would allow determination of optimal parameters for device performance, allow a reduction of the amount of time to determine device characteristics, and be available for use in SPICE.

Required is a solution to the following PDE system.

$ ( \frac{\partial^2}{\partial x^2} + \delta^2 \frac{\partial^2}{\partial y^2}) w = \sigma^2 \exp{(w - \varphi)} $   (1)
$ \frac{\partial}{\partial x}(e^{w-\varphi} \frac{\partial \varphi}{\partial x}) + \delta^2 \frac{\partial}{\partial y} (e^{w-\varphi} \frac{\partial \varphi}{\partial y}) = 0 $ (2)

with $ \delta $, $ \sigma $ small, and boundary conditions given on the rectangle |x|<1, |y|<1: on x=1,-1, $ w $ and $ \varphi $ are given, and on y=1,-1, $ w $ satisfies a Robin condition and $ \partial \varphi / \partial y = 0 $. The modeling and scaling that gives rise to these equations is explained in this paper. We have numerical solutions to these equations and these are available here. They show electron densities ($ \exp{(w-\varphi)} $) and the current flow, which is an integral of this density.

!!Final Report:
Compact Modeling for a Double Gate MOSFET
Presented by John Rogers, EcoArray, Inc.

EcoArray is an environmental testing company using microarrays to observe the changes in gene expression that occur in fish that are exposed to various chemicals.  Gene expression profiling is a more complex version of the idea of the canary down the mineshaft.  This testing is promising because current tests observe either the presence of a chemical in water or the death rate of a sentinel species exposed to a chemical.  Gene profiling speaks to biological impact (rather than just presence in the environment) and is considerably more sensitive (and quick) than counting dead fish.  

The microarray I will concentrate on for the statement of the problem is our 15,000-gene array in fathead minnows (“FHM”), the most common “sentinel species” for fresh water testing used in the United States.   We have exposed FHM to various levels of 14 chemicals representing chemical families of concern in environmental studies (PCB-126, several metals, herbicides, estrogen, androgen, pesticides).  The task is to determine patterns of expression in those 15,000 genes that distinguish each chemical.  We call the unique group of genes that is differentially expressed for each chemical its biomarker fingerprint.  Ideally, we would like to be able to test a small sample of fish from a water source and determine what chemicals are affecting that fish … identifying more than one chemical if such is the case. 

I have attached a brief overview of how a microarray is processed. After applying RNA from the fish, treated with specialized dyes, the microarray is scanned by a specialized scanner, which produces data in the form of intensity statistics. When the data is produced, it goes through two stages of analysis. The first is normalization, in which the signal read from the scanner is converted into usable data. This is done in software associated with the scanner. The second stage is analysis, or bioinformatics. We use a fairly sophisticated software package, GeneSpring GX version 10, that allows for a great deal of statistical manipulation and visualization. GeneSpring is powerful for analyzing specific experiments and for filtering data sets.

In the bioinformatics stage of analysis, intensity ratios for genes of fish exposed to known contaminants are then compared to ‘control’ (null) expression intensity. A gene can be up-regulated or down-regulated compared to the control.  A good biomarker is one which is significantly different from a control value for one single chemical and which responds only to that chemical.
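The biomarker-fingerprint idea can be sketched as a simple filter (JavaScript; invented data and an arbitrary threshold, not EcoArray's pipeline): flag genes whose expression ratio responds strongly to exactly one chemical.

```javascript
// Toy biomarker filter: given log2 expression ratios vs. control for each
// chemical, keep genes that respond strongly (|ratio| >= threshold) to
// exactly one chemical. Data and threshold are invented for illustration.
function uniqueBiomarkers(ratios, threshold = 1) {
  const chems = Object.keys(ratios);
  const nGenes = ratios[chems[0]].length;
  const result = {};
  chems.forEach((c) => (result[c] = []));
  for (let g = 0; g < nGenes; g++) {
    const hits = chems.filter((c) => Math.abs(ratios[c][g]) >= threshold);
    if (hits.length === 1) result[hits[0]].push(g); // responds to one chemical only
  }
  return result;
}

const demo = uniqueBiomarkers({
  estrogen: [2.5, 0.1, -1.8, 0.3],
  cadmium:  [0.2, 0.0, -1.5, 1.4],
});
console.log(demo); // { estrogen: [0], cadmium: [3] }
```

Gene 2 in the demo illustrates the difficulty described next: it responds to both chemicals, so it is discarded even though it is strongly regulated, and this is one reason good unique biomarkers turn out to be rare.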
We have done some preliminary work on 7 of the chemicals.  From this we have determined several things:

# Good biomarkers are relatively rare and vary widely from chemical to chemical. In a study of the first 7 of our chemicals, we found that 2 chemicals have only 6 usable biomarkers (of which only one apiece is unique to that chemical), whereas hormones show hundreds of marker genes.

!!Final Report
Title: Identifying Bio-markers for EcoArray ([[PDF|http://ccms.claremont.edu/files/MIIreportEcoArray.pdf]])
Presented by Seth Potter, The Boeing Company

!!Power Beaming Analysis
This problem is related to the transmission of electromagnetic waves for power beaming, with possible applications to communications, navigation, and radar. The goal is to relate the current distribution across the face of a circular phased-array antenna to the energy distribution at the receiver site; i.e., beam tapering. The purpose is to produce a beam whose energy is directed to where it is most useful, while minimizing sidelobes. There are potentially three areas that would be useful to model, and each follows from the preceding one. The Workshop may decide to tackle one, two, or all three, to whatever extent is practical. They are:

# Given the current distribution across the face of a phased-array antenna expressed as any arbitrary mathematical function, compute the energy distribution at the receiver in the far field. Emphasize circular transmitters with azimuthally symmetric current distributions. In that case, the received energy distribution will be proportional to the ~Fourier-Bessel transform of the transmitting antenna current distribution (except that it is integrated over the finite area of the transmitting antenna). For a uniformly-illuminated transmitting antenna, the received beam in the far field is of the form $ \{ \frac{J_1(r)}{r} \}^2 $, where $ r $ is the dimensionless radial distance from the center of the beam pattern at the receiver site (in the plane of the receiver site) and $ J_1 $ is the first-order Bessel function of the first kind. A suggested solution is to use superposition of these functions for various scales of $ r $. This has been modeled at Boeing for a family of special cases where the transmitted beam is a stepped Gaussian. A more general solution is desired. Such a solution may also be useful to model other types of circular antennas with approximate azimuthal symmetry; e.g., a parabolic dish with a blocked aperture.
# Reverse the process of Problem 1 so that given a desired energy distribution at the receiver, the transmitted beam taper needed to produce it can be computed. Often, there will not be a solution if the transmitting antenna size is overly constrained (typically because the transmitter needed may be impracticably large), so it would be desirable to be able to tell by brief inspection or estimation if a desired solution is feasible.
# Extend Problem 1 (and Problem 2, if time permits) into a sparsely-filled array, or equivalently, a group of formation-flying satellites transmitting coherently toward a common target. This will produce multiple lobes, so the extension of Problem 1 is -- given a group of such satellites, what is the lobe pattern? The extension of Problem 2 is -- given a group of receiving antenna arrays (e.g., rectifying antennas for power beam reception) at some arbitrary set of locations on the ground, can we formation-fly a group of satellites that can form a lobe pattern that will fit them?
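The uniform-illumination pattern in Problem 1 can be evaluated directly; here is a small sketch (JavaScript; $ J_1 $ computed from its standard integral representation, purely illustrative numerics):

```javascript
// Far-field pattern for a uniformly illuminated circular aperture:
// intensity proportional to {J1(r)/r}^2. J1 is evaluated from the integral
// representation J1(x) = (1/pi) * Int_0^pi cos(theta - x sin(theta)) d(theta),
// approximated with the midpoint rule.
function besselJ1(x, n = 2000) {
  const h = Math.PI / n;
  let sum = 0;
  for (let k = 0; k < n; k++) {
    const theta = (k + 0.5) * h;
    sum += Math.cos(theta - x * Math.sin(theta));
  }
  return (sum * h) / Math.PI;
}

function pattern(r) {
  if (r === 0) return 0.25; // limit of (J1(r)/r)^2 as r -> 0
  const v = besselJ1(r) / r;
  return v * v;
}

console.log(pattern(1.0)); // ≈ 0.194
```

The first zero of $ J_1 $ near r ≈ 3.83 marks the edge of the central lobe; the rings beyond it are the sidelobes that the tapering in Problems 1 and 2 is meant to suppress.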

!!Final Report:
Optimization Techniques for the Power Beaming Analysis of Microwave Transmissions from a Space-Based Solar Power Satellite ([[PDF|http://ccms.claremont.edu/files/ccms_workshop_report_boeing.pdf]])
Presented by John Benoit, USDA Forestry Service
!!Hot Mathematical Challenges in Wildland Fire Science

Since 2000, the cost of firefighting to the USDA Forest Service has more than doubled compared with the previous decade; the annual average cost now exceeds a billion dollars. As a result, the agency recently implemented the Wildland Fire Decision Support System (WFDSS), which utilizes modeling and simulation to predict probabilistic fire spread for fires expected to incur significant costs. Still in prototype, the WFDSS design will incorporate technologies in weather and fire behavior modeling, geospatial analysis, and remote sensing.

WFDSS will provide a risk assessment system for tactical fire planning during the course of one or more fire incidents.  Fire behavior prediction is accomplished by a computer model which accounts for fuel, weather and terrain variations.  Fire spread probabilities are derived from the compilation of numerous fire spread simulations driven by multiple weather scenarios.  Expected losses are computed from the mapped values at risk that fall within the footprint of the fire spread probabilities.  The resultant quantification of fire risk will provide a basis for cost-effective management of wildfires.

Wildland fire risk assessment is still in a formative stage, faced by many and varied mathematical challenges.  The fire behavior model that WFDSS uses is based on laboratory experiments carried out approximately 40 years ago.  WFDSS weather scenarios are based in part on point climatologies that are unable to describe complex wind patterns in mountainous terrain.  Work is currently under way to use high resolution weather models to predict weather conditions likely to influence fire behaviour.

The fire spread probability maps only partially reflect the uncertainties in weather conditions that affect fire behaviour, and do not account at all for the uncertainties of fire model mis-specification, other erroneous model inputs, or measurement error, such as location of the fire ignition point.  The spatial/temporal characteristics of the random errors in fire spread models are yet to be described.  Given the pitfalls that come with the use of models, a question of significant operational interest is:  Can the accuracy of a flawed modelling system be substantially improved by a recursive mathematical updating technique (e.g., a Kalman filter)?
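The recursive updating idea raised in the question above can be illustrated with a scalar Kalman filter (JavaScript; all numbers invented) that blends a biased model forecast with observations:

```javascript
// Minimal scalar Kalman filter: blend a drifting model forecast (e.g.
// predicted fire perimeter growth) with noisy observations. q is the model
// noise variance, r the observation noise variance; all values illustrative.
function kalman1D(forecasts, observations, q, r) {
  let x = forecasts[0]; // state estimate
  let p = 1;            // estimate variance
  const estimates = [];
  for (let k = 0; k < observations.length; k++) {
    if (k > 0) x += forecasts[k] - forecasts[k - 1]; // predict: follow model increment
    p += q;                                          // ...and inflate uncertainty
    const gain = p / (p + r);                        // Kalman gain
    x += gain * (observations[k] - x);               // update toward the observation
    p *= 1 - gain;
    estimates.push(x);
  }
  return estimates;
}

// demo: the model over-predicts growth by 0.5 units per step; observations are exact
const model = Array.from({ length: 10 }, (_, k) => 1 + 1.5 * k);
const obs = Array.from({ length: 10 }, (_, k) => k + 1);
const est = kalman1D(model, obs, 0.1, 0.5);
console.log(est[est.length - 1]); // pulled well away from the biased forecast of 14.5
```

Even this toy shows the operational appeal: a systematically biased model, corrected recursively by observations, can track the truth far better than the raw forecast — though the real question involves spatially correlated errors, not a single scalar.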

Answers to these and other questions related to fire risk assessment would profoundly benefit wildland fire management in the US and everywhere else in the world where wildfire poses a public safety, economic, and ecological threat.

!!Final Report:
Title: Procedure for improving wildfire simulations using observations ([[PDF|http://ccms.claremont.edu/files/Math-in-Industry-2009-Workshop-Problem7.pdf]]) 
Microfluidics technology has recently gained a lot of popularity in biotechnology and life sciences. One of the areas affected by microfluidics technology at Siemens Healthcare is biomarker research for Positron Emission Tomography (PET) technology. Biomarkers are small molecules labeled with radio-isotopes and used in PET imaging, an innovative technology used in cancer treatment and diagnosis. Biomarkers are capable of penetrating cancerous cells to identify drug uptake and provide other vital information about the progress of treatment. Production of Biomarkers involves several radio-chemical processes.

A microfluidics-based biomarker synthesizer unit has been developed at the Siemens Healthcare Biomarker Research division. The unit consists of a multi-port reactor used for synthesis of biomarkers. The reactor is filled with different reagents at different stages of synthesis using 0.05 cm diameter Teflon tubing connected to each port. Typically, Teflon tubes are filled using automated syringe pumps which can deliver 10-100 micro-liters of reagents as single or multi-slugs with high volumetric accuracy (less than 1%). As tubes are filled with slugs of reagents, nitrogen gas is used to push the slugs to the reactor. The length of the tubing varies from 20 to 500 cm. In order to transfer a slug to the reactor, a syringe filled with nitrogen is connected to one end of the tube, and nitrogen is dispensed at a constant rate (micro-liters/s), creating pressure behind the slug and moving it towards the reactor.

!!Problem Statement:
For a given inner diameter of the Teflon tube d (fixed at 0.05 cm), and a syringe with a dispensing rate of v, find the governing equations for the motion of a single slug of length l (approximate value 0.5 cm) inside the tube, and discuss key parameters affecting the velocity of the slug inside a straight tube (e.g., static and dynamic contact angles of the slug within the tube). Assume atmospheric pressure at the other end of the tube. Use water for all liquid properties. For initial conditions, consider atmospheric pressure on both sides of the slug, with the syringe at rest and reaching v almost instantaneously. The supply of nitrogen via the syringe can be assumed continuous and unlimited.
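A back-of-envelope sketch (JavaScript; our simplification with an assumed dispense rate, ignoring the contact-angle effects the problem statement highlights) fixes the mean slug speed by mass conservation and estimates the viscous pressure drop from Poiseuille flow:

```javascript
// Once the gas pressure has built up, incompressible mass conservation fixes
// the mean slug speed at v = Q/A, and a Poiseuille estimate gives the viscous
// pressure drop across a slug of length l: dP = 8*mu*l*v / R^2.
// The 10 uL/s dispense rate is an assumed example value.
const d = 0.05e-2;           // tube inner diameter, m (0.05 cm)
const R = d / 2;
const A = Math.PI * R * R;   // cross-section, m^2
const mu = 1.0e-3;           // water viscosity, Pa s
const l = 0.5e-2;            // slug length, m (0.5 cm)
const Q = 10e-9;             // nitrogen dispense rate, m^3/s (10 uL/s, assumed)

const v = Q / A;                       // slug speed, m/s
const dP = (8 * mu * l * v) / (R * R); // viscous pressure drop, Pa
console.log(v.toFixed(4), dP.toFixed(1)); // "0.0509" m/s, "32.6" Pa
```

The tens-of-pascals viscous drop is small, which suggests that in the real device the capillary (contact-angle) pressures and the compressibility of the nitrogen column, both excluded here, dominate the transient dynamics the problem asks for.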

!!Final Report:
Title: The dynamics of liquid slugs forced by a syringe pump ([[PDF|http://ccms.claremont.edu/files/siemens-report-final.pdf]])

!!Final Presentation:
Title: Siemens Health Care
Slugging Along Tube ([[PDF|http://ccms.claremont.edu/files/siemens-report-final-presentation_0.pdf]]) 
Presented by Chris Cooney, Akonni Biosystems

A microfluidic biochip device has been designed and proven to withstand temperatures up to approximately 90 deg C. Although effective for some of our applications, there is a desire to increase this temperature by an additional 5 to 7 deg C. The chamber design (see the schematic below) consists of an inlet, a reaction chamber, a waste chamber, a channel connecting the reaction chamber to the waste chamber, and a vent hole.

The reaction chamber volumes are low enough that surface tension dominates. The current microfluidic design does not require hydrophobic stops for the liquid containment, and for manufacturing reasons, we prefer not to have to add such a feature. The inlet is sealed prior to the temperature elevation, but the vent remains open. What should the shape of the connecting channel be in order to prevent the liquid in the reaction chamber from entering the waste chamber during a 5-minute temperature elevation step?

* Prefer not to have to treat the surface to create a hydrophobic stop
* For manufacturing purposes the geometries should not have features less than 0.2 mm
* Must be a closed system except for a vent hole and an inlet hole
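As a rough orientation for the capillary scales involved (JavaScript; assumed surface tension and contact angle, not Akonni's design values), the Young-Laplace pressure across a meniscus spanning a rectangular channel sets the pressure barrier that the connecting channel's shape can create:

```javascript
// Young-Laplace pressure for a meniscus spanning a rectangular channel of
// width wch and depth dch: dP = gamma * cos(theta) * (2/wch + 2/dch).
// Surface tension and contact angle below are assumed, illustrative values.
function laplacePressure(gamma, thetaDeg, wch, dch) {
  const ct = Math.cos((thetaDeg * Math.PI) / 180);
  return gamma * ct * (2 / wch + 2 / dch); // Pa
}

// Water-like surface tension near 95 deg C (~0.060 N/m, assumed), 60 deg
// contact angle (assumed), 0.2 mm x 0.2 mm channel (the stated minimum feature):
console.log(laplacePressure(0.060, 60, 0.2e-3, 0.2e-3)); // ~600 Pa
```

With the 0.2 mm minimum feature size, the available capillary pressure scale is only hundreds of pascals, which is why the channel's shape (rather than a surface treatment) has to do the work of holding the liquid back.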


!!Final Report: 
Akonni Biosystems: Wicking in Microchannels on Biochips ([[PDF|http://ccms.claremont.edu/files/akonni-report.pdf]])
* 2009, Jul 27-31: Claremont (USA). [[Claremont 2009|Claremont 2009]]
with Boston Scientific

Boston Scientific are interested in understanding the mechanism involved when a volatile coating solution is coated onto a cylindrical wire lattice mounted on a mandrel (see photo). The coating solution used has a high viscosity (approx. 100 centipoise) with a 20% solids solution in a volatile solvent. The lattice and mandrel can be coated with alternative coating processes. The mandrel is a very smooth PTFE fixture and the lattice sits snugly over the mandrel. The lattice has 18 'diamonds' in its circumference, each measuring about 2 mm x 3 mm.
The diamond dimensions are fixed. The percentage of solids may vary with evaporation (higher percentage of solids over time). Layering on the coating appears to work better (it prevents holes from forming) than application over a short period (4 minutes). The coat weight is approx. 1 gram and appears to depend on the coating process. The thickness of the coating in the middle of the 'diamond' varies from approx. 20 um to 50 um.
The project has been proposed by a public sector institution. The main challenge is to design aggregated indicators to measure the quality of service rendered by the Wholesale Telecommunication Operator to Alternative Operators (AO). The indicators must be comparable, i.e. they should indicate whether some AOs are favored or discriminated against.
!The 'holey cheese' problem
One of the steps in the design process of chips is the positioning of every single component or 'cell' on the chip. The cells are mutually connected by wires. The wiring scheme is given, and in this phase of the design process the positioning of the various cells must be determined. Some (relatively few) cells have a prescribed position.

For the classical positioning problem one considers the chip as a two-dimensional plane; the cells are modelled by rectangles of various sizes. The positioning has to satisfy some conditions:

# the cells must be placed within a certain rectangle (core area)
# cells are not allowed to overlap
# the total wire length must be minimized. 

For this problem many algorithms are known, each with its specific pros and cons.
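To fix ideas, the wire-length objective in condition 3 is commonly approximated by half-perimeter wire length (HPWL). A minimal sketch of the objective and the non-overlap check (the function names and the fixed-rectangle representation are illustrative, not Magma DA's internals):

```python
def hpwl(net, positions):
    """Half-perimeter wire length of one net: half-perimeter of the
    bounding box of the centres of the cells it connects."""
    xs = [positions[c][0] for c in net]
    ys = [positions[c][1] for c in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wire_length(nets, positions):
    """Objective (condition 3): sum of HPWL over all nets."""
    return sum(hpwl(net, positions) for net in nets)

def overlaps(a, b):
    """Condition 2: axis-aligned rectangles (x, y, w, h) must not overlap.
    The same test against blocked rectangles handles the 'holey cheese' case."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

The same `overlaps` test applied against the blockage rectangles expresses the additional 'holey cheese' constraint.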

The problem becomes more difficult when large parts of the core area are excluded from positioning, often caused by large, functional components that were placed beforehand (one could think of memory, or components that are designed by other companies). The remaining 'free area' within the core area is usually comparable to a cheese with holes, or 'holey cheese'. Obviously, the cells cannot be placed on the blockages, and this additional requirement makes the positioning problem significantly harder.
The picture shows an example of a typical 'holey cheese'. The green areas are allowed, the white ones are blocked. (The 'noise' in the green areas has nothing to do with the free space.) Note that the free area is strongly disconnected.

The current algorithms of Magma DA suffice for more or less convex areas. However, in 'holey cheeses' they often end up in local minima that are far from optimal.

How can Magma DA find good solutions in case of holey cheeses? 
The Artis Zoo has a problem in its Aquarium and in the adjacent Zoological Museum.

Part of the Aquarium is a corridor which contains so called mammoth tanks which measure 5 by 2.5 by 20 meters and are filled with water. Because of the tropical fish inside, the water should have a temperature of 24 degrees Celsius. As there is not much daylight, a dozen lamps have been placed just above the aquarium to make sure the fish inside are visible. However, these big lamps produce a lot of heat.

When in summer time the outside temperature reaches 25 degrees, the temperature in the corridor containing the mammoth tanks increases up to 30 degrees. The water itself becomes 27 degrees, which is too much for the fish inside. In the neighbourhood of the lamps, the temperature rises to 40 degrees. Just under the roof sometimes temperatures of 60 degrees have been measured.
The museum, adjacent to the aquarium, is suffering from the heat as well: a lot of objects (like stuffed animals) are no longer allowed to be displayed.
There are some fans, but these cannot do the job, especially not if doors are opened to fight the heat.

How can we change this situation with a minimum of cost and inconvenience for the visitors, employees and fish? 
An important cause of bulk carrier loss [1] is ingress of sea water owing to cracks in the side shell as a consequence of main frame deterioration and the subsequent failure of transverse bulkheads adjacent to the flooded hold. When the hold is flooded the stresses in the bulkhead are known to exceed yield and the bulkhead to have undergone slight distortion [2]. The vertical bulkheads are corrugated and have upper support from cross deck strips. Typical initial damages are cracks in the joint between bulkhead corrugations and deck plating due partly to corrosion. These can lead to portions of the cross deck strip becoming detached from the bulkhead. Pressure on the ship sides can then cause the bulkhead and cross deck strip to buckle, with minor and local buckles on the bulkhead leading to shear buckling of the whole structure.

The purpose of the Study Group will be an investigation of this cracking and buckling process.

[1] Det Norske Veritas report, Bulk carrier losses, Nov. 1991
[2] Lloyd's List, Letter from J Bell (IACS President), Feb. 1997
The main project objective is to apply advanced mathematical methods, especially cryptographic techniques used in the process of digital marking of content, to guarantee verification of the integrity of long term stored digital content.
A system that occurs in several contexts in computer and data networks has the generic form shown in Figure 1.

{{c{Figure 1: Diagram of system}}}

There are `N` sources, and each source `n` transmits bursts of data at speed `R_n` packets a second, and they share a medium to which access is controlled by a queueing discipline. The queue has a limited buffer space `B`, and so in some conditions packets are dropped. The system feeds back to source `n` the proportion `p_n` of its packets that were dropped, and the source then adjusts its sending rate according to the formula
$$ R_n(p_n) = \frac{K}{ RTT_n \sqrt{Cp_n+D} }. $$
Here `K, C` and `D` are to be treated as known dimensionless constants, and $RTT_n$ is the round-trip time of the packets, from the time source `n` sends them to the time it receives the feedback that a proportion `p_n` of them were dropped.

The problem is to find the equilibrium point $(R_n(p_n), p_n, RTT_n)$, and also information about the statistics of the distribution, eg the variance of the packet loss distribution.

!Detailed models:
There are 2 cases to consider:
# The `N` sources //always// have data to send.
# The `N` sources each switch between ON/OFF according to independent Markov chains, and when source `n` is ON, it sends data in bursts at rate `R_n`.
!!!!Packet discard policies
There are 2 packet discard policies that might be in operation:
# Drop-tail: this simply means that if a packet arrives when the buffer is full then it, and any subsequent packets of the same burst, are dropped.
# Random Early Discard (RED): if the buffer occupancy exceeds some threshold `B_t` (`B_t < B`), then arriving packets are dropped with a probability varying linearly between 0 at `B_t` and 1 at `B`, different packets being independent.
The graphs of packet drop probability against buffer occupancy in these 2 cases are illustrated in Figure 2.
{{c{Figure 2: Packet loss probability for the two packet discard policies}}}
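The RED ramp described above is simple to state in code (a sketch; the function name and signature are illustrative):

```python
def red_drop_prob(q, B_t, B):
    """Random Early Discard: packet drop probability as a function of
    current buffer occupancy q, with threshold B_t and buffer size B."""
    if q < B_t:
        return 0.0                      # below threshold: never drop
    if q >= B:
        return 1.0                      # buffer full: always drop
    return (q - B_t) / (B - B_t)        # linear ramp from 0 at B_t to 1 at B
```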

!!!!Queueing disciplines
There are 3 queueing disciplines of interest:
# FIFO: First in-first out. This is the simplest case in which all packets form a single queue and are dealt with in order of arrival.
# Round Robin: Here, there are `N` queues, one for the packets from each source. Packets are serviced one from each queue in turn, in a fixed cyclic order, skipping any queues that are empty.
# Giving priority to flows with the least service time: In this case, the queue manager system has information about the average transmission rates for the packets from each source, and gives some weighted priority to those packets that can be transmitted fastest.

The queueing patterns in the first 2 cases are illustrated in Figure 3.
{{c{Figure 3: FIFO and round robin queueing systems}}}

!Service times
The service time per data packet is random, and for the purposes of this study it can be treated as exponentially distributed with a known mean service rate `\mu_n`, different packets being independent.

!Priorities for the Study Group:
The combinations of cases, in order of most interest, are
# Drop-tail and FIFO
# Drop-tail and Round Robin
# Random Early Discard and FIFO
# Random Early Discard and Round Robin
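For priority case 1 (drop-tail, FIFO, always-on sources) one crude way to approach the equilibrium question is a fixed-point iteration, treating the shared queue as M/M/1/B so that the drop proportion equals the blocking probability. The M/M/1/B idealisation, the single common `p`, and all names below are our assumptions, not part of the problem statement:

```python
import math

def mm1b_loss(lam, mu, B):
    """Blocking probability of an M/M/1/B queue: Poisson arrivals at rate lam,
    exponential service at rate mu, at most B packets in the system."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (B + 1)
    return (1.0 - rho) * rho**B / (1.0 - rho**(B + 1))

def equilibrium(RTTs, K, C, D, mu, B, iters=500):
    """Damped fixed-point iteration for a common drop proportion p:
    each source sets R_n = K / (RTT_n * sqrt(C p + D)), and the queue
    feeds back the blocking probability of the aggregate load."""
    p = 0.01
    for _ in range(iters):
        rates = [K / (rtt * math.sqrt(C * p + D)) for rtt in RTTs]
        p = 0.5 * p + 0.5 * mm1b_loss(sum(rates), mu, B)  # damping aids convergence
    return p, [K / (rtt * math.sqrt(C * p + D)) for rtt in RTTs]
```

Note that in this sketch every source sees the same drop proportion; distinguishing the `p_n` per source would require a more detailed queue model.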
Many areas of forensic science deal with questions concerning the selection of evidence. If certain features are used to select a suspect from a large group of potential suspects, can you use the same features as evidence in a legal case against the suspect? If the answer is yes, is the evidential value the same as in cases where the suspect is selected through other independent evidence, or should the expert "correct" the evidential value for a "selection effect"? This question also applies in situations where trace evidence is selected.

For example, in fiber casework a "foreign" fiber found on a murder victim may be compared to a few items related to the suspect, such as his clothing, or to a large number of items, e.g. when his whole house, office and car is searched. When an item is found which contains fibers matching the foreign fiber found on the victim, e.g. a sweater or a car seat, the evidential value obviously depends on the rarity of the fiber type. But how does the evidential value of the matching item depend on the way this item was selected? Knowing that the probability of finding a matching item by chance increases with the number of items compared, does that mean that the evidential value of the matching item is less in the situation where many items were compared?
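The probability of a coincidental match grows quickly with the number of items compared. Under the idealised assumption that each unrelated item matches independently with the fiber type's population frequency, a one-line calculation illustrates the effect (the function name and the independence assumption are ours):

```python
def p_chance_match(freq, n_items):
    """Probability of at least one coincidental match when n_items unrelated
    items are compared, each matching independently with probability freq
    (an idealisation of the fiber-search scenario above)."""
    return 1.0 - (1.0 - freq) ** n_items
```

For a fiber type with a frequency of 1%, comparing one item gives a 1% chance of a coincidental match, while searching a hundred items gives roughly 63% — which is why the selection procedure matters for the evidential value.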

A second example is facial comparison, where the face of a suspect is compared to the face of a robber on a video tape. The conclusion depends on the similarities and differences observed, and their rarity. For example, if both the suspect and the robber have an odd-shaped scar on the cheek, this has a high evidential value. How should the facial comparison expert take account of the fact that the suspect is identified by the general public through showing the tape on the television? Is it essential for the expert to know how the suspect is selected in order to write his report, or should it not matter?

A third example is statistical evidence used in cases where someone is involved in a strikingly high number of incidents. This type of evidence was recently used in a criminal case against a nurse (Lucia de B.) in the Netherlands. The statisticians involved in this case disagreed on the "post hoc correction" for the fact that the nurse was picked out precisely because she was involved in so many incidents.

In short: does a forensic expert need to know how exactly the evidential material is selected? Should he correct the evidential value, which is usually expressed as (a verbal guestimate of) a frequency, for a selection effect? If so, how? 

<h1 align="center">The Evaluation of Fish Freshness</h1>
<p align="center">Paul Nesvadba and Dave Simmonds, Robert Gordon University (RGU)</p>

<h4>Fish Freshness:</h4>

<p>The evaluation of the quality criteria used to define
 the freshness of fish as well as the development of reliable methods 
for assessing this freshness has been the goal of fish research for many
 years. For a measuring tool to be usable day to day by workers in the 
fish industry it must be robust, simple to use, and provide objective 
measurements. Although significant progress has been made in developing 
such tools, more research is needed to verify their usefulness for 
measuring freshness across the broad range of fish types and fish products.</p>

<h4>Sensory Evaluation Methods</h4>

<p>These involve judging freshness by smell, colour, 
appearance, taste and texture of cooked flesh, etc. The validity of the 
standard EU scheme for the sensory assessment of fish freshness, 
introduced in 1976, has been questioned as it takes no account of the 
differences between species and uses only general parameters. The 
Concerted Action "Evaluation of Fish Freshness" (1997) concluded that </p>

<p>"<i>Sensory evaluation is the most important method 
today for freshness evaluation in the fish sector. The trend is to 
standardise sensory evaluation by improving methodologies and providing 
suitable training in their use to make sensory evaluation an objective 
measurement. The Quality Index Method (QIM) has been used by many 
research laboratories and is now being implemented in the fish industry.
 When compared with the EU scheme its main advantage is that it is 
specific for each species and the fluctuation between assessors is reduced.</i>"</p>

<p>The QIM takes a number of different sensory 
measurements for raw fish, scoring them each between 0 and 3, and adds 
them to obtain a quality index. Sensory evaluation is expensive since it
 involves training panels of people in the evaluation techniques and 
there is still a drive towards the development of simple, reliable, 
hand-held instruments for measuring the freshness of fish.</p>

<h4>Instrumental Evaluation Methods</h4>

<p>There are several instrumental methods currently being researched. Each of these methods can provide different measurements for use in the computation of some kind of lumped index:</p>

<p><b><i>Microbial Methods</i></b> 
assess freshness by measuring the level of spoilage micro-organisms. At 
present reliable measurement may take 24 hours to obtain.</p>

<p><b><i>Volatile Compound Methods</i></b> use gas sensors (electronic noses). More work is needed on the characterisation of the properties of fish odours.</p>

<p><b><i>Muscle Protein Methods</i></b> measure post-mortem changes in the intermediate filaments and proteins. No rapid measuring systems are yet available.</p>

<p><b><i>Electrical Properties Methods</i></b>
 provide quick results and are successful in laboratory tests. A major 
drawback is the erroneous results produced by mechanical damage to the 
fish during handling.</p>

<p><b><i>Colour Measurement Methods</i></b> relate changes in fish freshness to the colour of the flesh and are still being studied and tested.</p>

<p><b><i>Time/Temperature Indicators (TTI)</i></b> monitor the fish temperature from the time it is caught, through any refrigeration, until the point of sale. The drawback is that, to be at all reliable, the indicator must follow the fish accurately from day zero.</p>

<p><b><i>Mechanical Properties Methods</i></b>
 attempt to measure the physical texture of the fish using different 
kinds of instruments. Such methods have shown good correlation between 
texture measurement and sensory analysis for some species. The main 
problem with these methods is sample preparation due to the non-uniform 
structure of fish flesh and different orientation of structures.</p>

<h4>Texture Measurement by the Food Science and Technology Research centre at RGU.</h4>


<p>Mechanical devices for measuring the texture of fish 
usually involve some kind of cylindrical probe that presses into the 
fish with the force, <i>F</i>, increasing to a preset value. The corresponding depth, <i>h</i>, of the concavity produced can be measured and related to the elasticity of the flesh, as illustrated in Figure 1.</p>
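<p>As background (not part of the original problem statement): for a rigid flat-ended cylindrical punch of radius <i>a</i> pressed to a depth <i>h</i> into a linearly elastic half-space, Sneddon's classical solution predicts a linear force-depth relation,</p>

```latex
F = 2 E^{*} a h, \qquad E^{*} = \frac{E}{1-\nu^{2}},
```

<p>where <i>E</i> is the Young's modulus and <i>&nu;</i> the Poisson ratio of the flesh. On this idealisation, departures from linearity in <i>F(h)</i> would carry information beyond simple elasticity.</p>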

<b><p>	<img src="http://www.maths-in-industry.org/past/ESGI/34/fish1.gif" height="386" width="619"></p>
</b><p><img src="http://www.maths-in-industry.org/past/ESGI/34/fish2.gif" height="437" width="617"></p>

<p>A drawback with the device of Figure 1 is that the specimen has to be mounted in a device so that the movement, <i>h</i>,
 can be properly measured against a reference point. It would be 
possible perhaps to add some external structure for measuring <i>h</i> as 
suggested in Figure 1, but this would make the device cumbersome and awkward
 to use.</p>

<p>A variation of this idea, recently tested at RGU, involves a small open-ended cylinder (outer diameter <i>d<sub>O</sub></i><sub> </sub>, inner diameter <i>d<sub>I</sub></i><sub> </sub>). It was hoped that it would cause a "meniscus" of flesh of height <i>h</i>
 to swell up across the diameter of the probe as illustrated in Figure 
2. The advantage of this approach would be that a sensor to measure <i>h</i> could be built inside the probe as shown.</p>

<p>The aim would be to measure the height of this 
meniscus and relate it to the elasticity and hence texture of the fish 
flesh. It is clearly important for the reliability of the measurements 
obtained that the measuring process does not in any way damage the 
underlying structure of the flesh or cause it to behave non-elastically.
 For this reason the probe is applied to the surface of the fish with 
minimal force and constant speed.</p>

<p>The first version of the device (with <i>d<sub>O</sub> = 40mm</i> and <i>d<sub>I</sub> = 30mm</i>) was mounted on a Stevens texturometer and tested initially with various foam rubbers. The compression was achieved by moving the cylinder downwards at a steady speed of 30mm/min. Testing with fish and plotting the height of the meniscus against the force applied produced a non-linear relationship between <i>F</i> and <i>h</i>, even though the flesh did not swell up into a meniscus as expected. Instead the fish surface inside the cylinder remained more or less unmoved as the circle of flesh around it was pressed inwards. The value of <i>h</i>, as measured by the displacement transducer, increased linearly with time (constant speed of descent), indicating that the meniscus did not swell up. Although not what was expected, this could still prove useful as it provides the value of <i>h</i> without the need for a reference point and could hence be the basis for a hand-held device.</p>

<h4>Time Dependence</h4>

<p>When the probe is removed the fish material does not 
return instantly to where it was so cyclic application of the probe will
 produce different results. This is because fish is a visco-elastic 
material which means that its stress-strain behaviour is dependent on 
the rate of deformation. The time taken for the flesh to return to its 
initial state could be also related to its freshness and could perhaps 
be used to measure the latter. Time dependence may turn out to be the 
main problem to solve, in which case lumped parameter models may be more appropriate.</p>

<h4>The Questions</h4>

<p>(a) In the basic CPT (Figure 1): </p><dir>

<p>(i) Is it possible to predict the relationship between <i>F</i> and <i>h</i>? </p><p>(ii) How is the <i>F(h)</i> function related to the material elasticity? </p>
<p>(iii) What effect does the diameter of the probe have? </p></dir>

<p>(b) In the OCPT (Figure 2) :</p><dir>

<p>(i) Is it possible to predict the relationship between <i>F</i> and <i>h</i>? </p>
<p>(ii) How is <i>F(h)</i> related to the material elasticity ?</p>
<p>(iii) Is it possible to choose values for the outer diameter <i>d<sub>O</sub></i> and the inner diameter <i>d<sub>I</sub></i> to get the most information about the material? </p></dir>

<p>(c) In both devices:</p><dir>

<p>(i) How far away from the probe does the sample feel the compression?</p><p>(ii) How thick does the sample have to be to avoid interference from the substrate or fish bone ? </p>
<p>(iii) What features should we be measuring to get a sensible value for the fish texture score?</p>
<p>(iv) How important is visco-elastic time-dependent behaviour?</p></dir>

<h2>Pictures of the apparatus</h2>


Hand-held device as initially envisaged, shown testing 
a piece of foam rubber

<b><p>	<img src="http://www.maths-in-industry.org/past/ESGI/34/fish3.gif" height="386" width="619"></p>

Close-up of the cylindrical tube and the meniscus 
height sensor with the core withdrawn from the LVDT, lying on the 

</b><p><img src="http://www.maths-in-industry.org/past/ESGI/34/fish4.gif" height="437" width="617"></p>

Mrs Rosemary Hastings pictured with the LVDT device 
mounted on a Stevens Texturometer (to provide controlled rates of 
vertical movement and the compression stress and strain).

<p><img src="http://www.maths-in-industry.org/past/ESGI/34/fish5.gif" height="437" width="460"></p>

!Dependency modelling in credit risk
Banks as well as other companies need capital to withstand potential future losses. Banks assess the minimum size of their capital base using various methods, including internally developed models.

One common approach to estimating the capital needed to cover credit losses within a bank’s portfolio of loans is to simulate the future state of the portfolio. The outcome of the simulation is a loss distribution which enables the bank to calculate the recommended minimum capital base, taking into account the desired external rating and the risk appetite of the bank.

Figure 1 below shows a schematically drawn loss distribution. Expected loss, EL, is the average value of losses ‘observed’ in the simulations, and economic capital, EC, is the 99.97th percentile loss minus EL. If a suitable spread is added to the interest rate of the loans, the revenue from the spread covers EL. EC, on the other hand, is the recommended minimum capital base and is supposed to cover any losses exceeding expected loss. With EC calculated at a 99.97% confidence level the bank should be able to cover its losses within the next year in 9,997 out of every 10,000 cases.

Figure 1: Loss distribution (schematic)

An accurate estimate of EC requires an accurate description of the tail of the loss distribution. One of the main contributors to this tail is the //correlation// between obligors, i.e. the extent to which obligors tend to default on their loans simultaneously. The more correlated the obligors, the more extended the tail and the higher the EC. Thus, modelling correlations (or //dependency modelling//) is paramount to credit risk modelling and seems to be one of the great challenges facing most banks.

Correlations are commonly modelled using a so-called //factor model//. That is, the financial health of obligor `i` depends on a set of //systematic risk drivers//, `X=(X_1,\ldots,X_M)`, and an //idiosyncratic term//, `\epsilon_i`:
$$ r_i = R_i\left[\sum_{j=1}^M \alpha_{ij}X_j\right]+\sqrt{1-R_i^2}\,\epsilon_i. $$

In the simulations, obligor `i` defaults if the value of `r_i` falls below a certain threshold.

The chosen systematic drivers are typically equity indices. For instance, an ordinary Danish retail customer could be mapped to MSCI Denmark which would then reflect the macroeconomic environment surrounding the customer, whereas the idiosyncratic term would reflect the unique circumstances of this particular customer, i.e. things that are not related to the state of the economy.

In the simulations the `X_j`’s can be taken to be normally distributed with variances and correlations derived from recent market data. For instance, one could use 5 years of monthly observations to derive the variances of the equity indices and the correlations between them. The `\epsilon_i`’s are taken to be independent, standard normally distributed, independent of the `X_j`’s.
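The simulation just described can be sketched in a few lines (a minimal illustration: the input names and the simple default-indicator loss are hypothetical, and no effort is made to normalise `r_i` to unit variance):

```python
import numpy as np

def simulate_losses(alpha, R, thresholds, exposures, factor_cov,
                    n_sims=100_000, seed=0):
    """Monte Carlo loss distribution for the factor model above.
    alpha: (N, M) loadings; R: (N,) sensitivities; thresholds: (N,) default
    thresholds; exposures: (N,) loss on default. All inputs are hypothetical."""
    rng = np.random.default_rng(seed)
    N, M = alpha.shape
    X = rng.multivariate_normal(np.zeros(M), factor_cov, size=n_sims)  # systematic
    eps = rng.standard_normal((n_sims, N))                             # idiosyncratic
    r = (X @ alpha.T) * R + np.sqrt(1.0 - R**2) * eps                  # obligor health
    return (r < thresholds) @ exposures                                # portfolio loss

def economic_capital(losses, q=0.9997):
    """EC = 99.97th percentile loss minus expected loss, as in Figure 1."""
    return np.quantile(losses, q) - losses.mean()
```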
In the set-up described in the above, the correlation between obligors is determined by
# the correlations between the drivers,
# the sensitivity of each obligor to the drivers (`R_i` and the `\alpha_{ij}`’s).
As for item 1, the estimated correlation between the drivers and the correlation dynamics depend on the chosen data window (the longer the window, the more stable the parameters) and the frequency of observations (daily, monthly, etc.). When measuring capital on a one-year horizon, one wants the correlations to reflect the immediate future as well as possible.
We would like the working group to look into the following problems:
* The factor model described in the above may not be the optimal way of modelling dependency between obligors. Also, the intuitive link between equity indices and customer default is not as strong as the link between e.g. macro variables and default. Are there alternative ways, alternative data etc. which would provide a better and more intuitive description?
* If one uses the factor model described above, then what can be done to ensure correlation estimates which
# are sufficiently dynamic to capture changes in the market (e.g. the financial crisis) but still not too volatile?
# contain a forward-looking element as well?
* In the factor model described above, how can one determine the `R_i`’s and the `\alpha_{ij}`’s which provide the most accurate description of correlations? And how can the parameters be validated?
!Given a suboptimal parameter space

Calibration of robots is an important issue if robots are to be used efficiently in manufacturing. The increasing use of off-line programming is enhancing this problem. Over the last decade many methods have been developed to obtain a precise kinematic model of a robot. These models are based on a set of parameters that describes the position of frame `i` relative to frame `i-1`.

The problem is to find a new set of parameters that gives the best fit, when only a subset of the parameters can be changed.
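One way to read the task: minimise the position residuals over only the adjustable parameters, holding the rest fixed. A toy sketch with a hypothetical 2-link planar arm standing in for the controller's simplified kinematic model (real calibrations use full frame-to-frame parameter sets and a proper optimiser rather than a grid):

```python
import numpy as np

def fk(l1, l2, thetas):
    """Forward kinematics of a hypothetical 2-link planar arm:
    end-effector (x, y) for joint angles thetas of shape (n, 2)."""
    x = l1 * np.cos(thetas[:, 0]) + l2 * np.cos(thetas[:, 0] + thetas[:, 1])
    y = l1 * np.sin(thetas[:, 0]) + l2 * np.sin(thetas[:, 0] + thetas[:, 1])
    return np.stack([x, y], axis=1)

def fit_free_subset(l1_fixed, measured, thetas, l2_grid):
    """Best-fit value of the one adjustable parameter (l2) while l1 stays
    fixed — mimicking calibration when only a subset of parameters can change."""
    errs = [np.sum((fk(l1_fixed, l2, thetas) - measured) ** 2) for l2 in l2_grid]
    return l2_grid[int(np.argmin(errs))]
```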

This problem arises when working with existing robot controllers that have a simpler model of the robot. We still have to use the controller’s internal model when doing production tasks such as welding, where the robot moves according to its internal representation.

For further information about calibration methods, please write to marit@mip.sdu.dk for a copy of the article "The Calibration Index and Taxonomy for Robot Kinematic Calibration Methods".
The composition of a food substance can be inferred (in part) from its thermal properties. These can in turn be measured using a thermal probe. In a previous Study Group the effectiveness of using a single probe was considered. The current investigation will extend this to look at optimal placings of several probes so that, when repeated thermal measurements are made (with associated errors), the food composition can be accurately calculated and the errors in the determined composition estimated.
|Author|Eric Shulman|
|Description|selectively disable TiddlyWiki's automatic ~WikiWord linking behavior|
This plugin allows you to disable TiddlyWiki's automatic ~WikiWord linking behavior, so that WikiWords embedded in tiddler content will be rendered as regular text, instead of being automatically converted to tiddler links.  To create a tiddler link when automatic linking is disabled, you must enclose the link text within {{{[[...]]}}}.
You can block automatic WikiWord linking behavior for any specific tiddler by ''tagging it with<<tag excludeWikiWords>>'' (see configuration below) or, check a plugin option to disable automatic WikiWord links to non-existing tiddler titles, while still linking WikiWords that correspond to existing tiddlers titles or shadow tiddler titles.  You can also block specific selected WikiWords from being automatically linked by listing them in [[DisableWikiLinksList]] (see configuration below), separated by whitespace.  This tiddler is optional and, when present, causes the listed words to always be excluded, even if automatic linking of other WikiWords is being permitted.  

Note: WikiWords contained in default ''shadow'' tiddlers will be automatically linked unless you select an additional checkbox option that disables these automatic links as well. This is not recommended, since it can make it more difficult to access some TiddlyWiki standard default content (such as AdvancedOptions or SideBarTabs).
<<option chkDisableWikiLinks>> Disable ALL automatic WikiWord tiddler links
<<option chkAllowLinksFromShadowTiddlers>> ... except for WikiWords //contained in// shadow tiddlers
<<option chkDisableNonExistingWikiLinks>> Disable automatic WikiWord links for non-existing tiddlers
Disable automatic WikiWord links for words listed in: <<option txtDisableWikiLinksList>>
Disable automatic WikiWord links for tiddlers tagged with: <<option txtDisableWikiLinksTag>>
2008.07.22 [1.6.0] hijack tiddler changed() method to filter disabled wiki words from internal links[] array (so they won't appear in the missing tiddlers list)
2007.06.09 [1.5.0] added configurable txtDisableWikiLinksTag (default value: "excludeWikiWords") to allows selective disabling of automatic WikiWord links for any tiddler tagged with that value.
2006.12.31 [1.4.0] in formatter, test for chkDisableNonExistingWikiLinks
2006.12.09 [1.3.0] in formatter, test for excluded wiki words specified in DisableWikiLinksList
2006.12.09 [1.2.2] fix logic in autoLinkWikiWords() (was allowing links TO shadow tiddlers, even when chkDisableWikiLinks is TRUE).  
2006.12.09 [1.2.1] revised logic for handling links in shadow content
2006.12.08 [1.2.0] added hijack of Tiddler.prototype.autoLinkWikiWords so regular (non-bracketed) WikiWords won't be added to the missing list
2006.05.24 [1.1.0] added option to NOT bypass automatic wikiword links when displaying default shadow content (default is to auto-link shadow content)
2006.02.05 [1.0.1] wrapped wikifier hijack in init function to eliminate globals and avoid FireFox crash bug when referencing globals
2005.12.09 [1.0.0] initial release
version.extensions.DisableWikiLinksPlugin= {major: 1, minor: 6, revision: 0, date: new Date(2008,7,22)};

if (config.options.chkDisableNonExistingWikiLinks==undefined) config.options.chkDisableNonExistingWikiLinks= false;
if (config.options.chkDisableWikiLinks==undefined) config.options.chkDisableWikiLinks=false;
if (config.options.txtDisableWikiLinksList==undefined) config.options.txtDisableWikiLinksList="DisableWikiLinksList";
if (config.options.chkAllowLinksFromShadowTiddlers==undefined) config.options.chkAllowLinksFromShadowTiddlers=true;
if (config.options.txtDisableWikiLinksTag==undefined) config.options.txtDisableWikiLinksTag="excludeWikiWords";

// find the formatter for wikiLink and replace handler with 'pass-thru' rendering
function initDisableWikiLinksFormatter() {
	for (var i=0; i<config.formatters.length && config.formatters[i].name!="wikiLink"; i++);
	config.formatters[i].coreHandler=config.formatters[i].handler; // save core handler for pass-thru cases
	config.formatters[i].handler=function(w) {
		// suppress any leading "~" (if present)
		var skip=(w.matchText.substr(0,1)==config.textPrimitives.unWikiLink)?1:0;
		var title=w.matchText.substr(skip);
		var exists=store.tiddlerExists(title);
		var inShadow=w.tiddler && store.isShadowTiddler(w.tiddler.title);
		// check for excluded Tiddler
		if (w.tiddler && w.tiddler.isTagged(config.options.txtDisableWikiLinksTag))
			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
		// check for specific excluded wiki words
		var t=store.getTiddlerText(config.options.txtDisableWikiLinksList);
		if (t && t.length && t.indexOf(w.matchText)!=-1)
			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
		// if not disabling links from shadows (default setting)
		if (config.options.chkAllowLinksFromShadowTiddlers && inShadow)
			return this.coreHandler(w);
		// check for non-existing non-shadow tiddler
		if (config.options.chkDisableNonExistingWikiLinks && !exists)
			{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
		// if not enabled, just do standard WikiWord link formatting
		if (!config.options.chkDisableWikiLinks)
			return this.coreHandler(w);
		// just return text without linking
		w.outputText(w.output,w.matchStart+skip,w.nextMatch);
	};
}
initDisableWikiLinksFormatter();

Tiddler.prototype.coreAutoLinkWikiWords = Tiddler.prototype.autoLinkWikiWords;
Tiddler.prototype.autoLinkWikiWords = function()
{
	// if all automatic links are not disabled, just return results from core function
	if (!config.options.chkDisableWikiLinks)
		return this.coreAutoLinkWikiWords.apply(this,arguments);
	return false;
};

Tiddler.prototype.disableWikiLinks_changed = Tiddler.prototype.changed;
Tiddler.prototype.changed = function()
{
	this.disableWikiLinks_changed.apply(this,arguments); // run the core handler first
	// remove excluded wiki words from links array
	var t=store.getTiddlerText(config.options.txtDisableWikiLinksList,"").readBracketedList();
	if (t.length) for (var i=0; i<t.length; i++)
		if (this.links.contains(t[i]))
			this.links.splice(this.links.indexOf(t[i]),1); // source truncated here; removal step restored
};
In various contexts, NATS need to ensure low probabilities of errors. A representative example is given below, but generically the error probabilities depend on the tails of some probability distributions for which there is no theoretical model, but considerable amounts of data. In these circumstances, questions that arise include:
# What are the best probability density functions (p.d.f.s) to fit?
# How sensitive are the results to the choice of p.d.f.?
# Can results be statistically justified without underlying theoretical models?
# Are there other ways of arriving at conclusions without fitting a p.d.f. to the data?
# What results from extreme value theory or other areas might help?
Lots of real data will be supplied to the Study Group.
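As a toy illustration of the second question, the sketch below (a hypothetical example, not based on the NATS data) fits two candidate densities by maximum likelihood to the same heavy-tailed sample and compares the tail probability each assigns. The two fits agree in the bulk of the data but disagree by an order of magnitude in the tail.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
# Stand-in dataset: heavy-tailed errors (Laplace), mimicking data whose true
# law is unknown.  All parameters here are invented for illustration.
data = rng.laplace(scale=1.0, size=10_000)
thr = 5.0  # a tail threshold well beyond the bulk of the data

sigma = data.std()                          # Gaussian MLE scale (mean ~ 0)
b = np.abs(data - np.median(data)).mean()   # Laplace MLE scale

# Pr(|X| > thr) under each fitted (centred) density:
tail_gauss = math.erfc(thr / (sigma * math.sqrt(2)))
tail_lap = math.exp(-thr / b)
print(tail_gauss, tail_lap)  # the fits disagree by roughly an order of magnitude
```

Both fits describe the central 99% of the data about equally well, which is exactly why goodness-of-fit in the bulk says little about tail risk.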

!Illustrative example
Part of the NATS safety system requires that all of its radars at 22 sites have declared maximum safe ranges. NATS regularly undertakes an analysis of its radar performance to confirm or modify such ranges. The maximum safe range for a radar depends on the separation required between aircraft. The declaration is typically of the form "radar X can support 5Nm separations between pairs of aircraft at any range up to 120Nm from the radar". It should be noted that radar performance is not constant. It can depend on many factors including: icing conditions at the radar and other weather factors; the presence of new structures such as wind-farms; and modifications to the radar and its associated equipment.

To determine maximum safe range the following process is followed:
# Several hours of data are recorded from many radars simultaneously.
# The recorded data are post-processed to determine the true position of targets, and hence the individual position errors for each radar return and for each radar.
# The error data are then analysed statistically by another partly automated process to produce a maximum safe range estimate for a particular radar.

In this last stage, the distribution of radar bearing errors, x, is currently fitted by a sum of two symmetric exponentials,
$$ p(x) = \frac{1}{2} (p_1 \lambda_1 e^{-\lambda_1|x|} + p_2 \lambda_2 e^{-\lambda_2|x|}) \qquad (1) $$
where `p_i>0, p_1+p_2=1, \lambda_1>\lambda_2>0`, so that the central part of the density is dominated by the first term and the tails by the second. This is an empirical choice: we know no theory of the distribution. The fitted distribution is then used to estimate the Horizontal Overlap Probabilities (HOP), i.e. the probability that 2 aircraft that are really on the //same// bearing //appear// to be separated by more than a certain angle `\theta_0`:
$$ HOP = Pr(|X_1-X_2|\geq \theta_0), \qquad (2)$$
where `X_1` and `X_2` are independent samples from the distribution (1), so the question is answered using the tail probabilities of the convolution of (1) with itself.
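A Monte Carlo sketch of (2) is straightforward; the parameter values below (`p_1`, `p_2`, `\lambda_1`, `\lambda_2`, `\theta_0`) are illustrative assumptions, not values fitted to radar data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not NATS-calibrated):
p1, p2 = 0.9, 0.1        # mixture weights, p1 + p2 = 1
lam1, lam2 = 10.0, 1.0   # lambda_1 > lambda_2 > 0, so the second term owns the tail
theta0 = 2.0             # apparent-separation threshold
n = 1_000_000

def sample_mixture(size):
    """Draw from p(x) = (p1*lam1*exp(-lam1|x|) + p2*lam2*exp(-lam2|x|))/2,
    i.e. a two-component Laplace (double-exponential) mixture centred at 0."""
    comp1 = rng.random(size) < p1
    scale = np.where(comp1, 1.0 / lam1, 1.0 / lam2)
    return rng.laplace(loc=0.0, scale=scale)

x1 = sample_mixture(n)
x2 = sample_mixture(n)
hop = np.mean(np.abs(x1 - x2) >= theta0)  # estimates Pr(|X1 - X2| >= theta0)
print(f"Monte Carlo HOP estimate: {hop:.2e}")
```

With these numbers the HOP is dominated by pairs in which at least one error comes from the heavy (`\lambda_2`) component, which is the point of the two-term fit.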

The question then arises as to how well the observed data really characterize the tail of the distribution: do the computed HOP depend more or less directly on the data (and are they fairly robust to the assumed //form// of distribution), or is there quite a strong dependence on the form fitted? This is a typical context in which the five questions listed earlier arise.
!Other examples

Looking further afield than radar we find that all of our safety data comes in this form. Typical data is from
# radar azimuth or position errors (as above);
# altimetry system errors;
# gain/loss in longitudinal separations between pairs of aircraft under procedural control;
# crash location data around airports used for constructing individual risk contours.

We model radar errors as mixtures of exponentials (as above), and altimetry system errors as mixtures of Gaussians, exponentials and other combinations.

There is no real model for gain/loss data and the crash location is Weibull (at least in part). Decisions on separation standards and maximum safe ranges need to err on the cautious side but not be unrealistically constraining. Such decisions usually depend on some sort of convolution of the fitted p.d.f.s and so sensitivity to the treatment of the error data is important.
!Electrical power grids

KEMA is a global, leading authority in energy consulting and testing & certification, active throughout the entire energy value chain. In a world of increasing demand for energy, KEMA has a major role to play in ensuring the availability, reliability, sustainability and profitability of energy and related products and processes. KEMA combines unique expertise and facilities in order to add value to our customers in the field of risk, performance and quality management. With more than 2,000 people, operating from 20 countries around the globe, we are committed to offering reliable, sustainable and practical solutions. We understand and recognize the technical consequences of a business decision, as well as the business consequences of a technical decision. Innovative technology has been our starting point for more than 80 years. That is our experience you can trust.

!Problem description
Electrical power grids are becoming increasingly complex. The customer used to be solely a power consumer, whereas nowadays more and more customers are becoming power producers, mainly because of the development of novel components for decentralised power generation (solar panels, small wind turbines and heat pumps). In the near future, decentralised energy buffering is also expected to become important, due to the growth of the electric car market.

These developments pose many interesting questions to grid operators and electricity producers. To what extent is the current power infrastructure suited for the addition of this kind of energy-producing components? Or, at which locations should the infrastructure be reinforced to handle placements of additional components? What is the peak power that is produced by these components, as a function of time, day of the week, season, etc.? What kinds of correlations exist between the yields of multiple components of the same type, which are, for instance, installed at different geographical locations? For example, if the sun is shining in a particular street, then it is likely that the sun shines in all streets in the vicinity. And what about correlations in power production between different types of components, e.g. between solar panels and wind turbines, day versus night, seasonally?


KEMA addresses these types of questions, and advises grid operators and energy producers. Although the problem is clearly complex, for the SWI we will focus on the following question.

The transmission of power in each segment of an electrical power network can be determined through a load flow analysis according to Ohm's and Kirchhoff's laws. Solving these algebraic equations can be computationally involved. In particular, simulating many alternative configurations (due to proposed placements of additional decentral power-generating components in various locations of an existing power network, in order to assess the impact of such placements) is prohibitively complex.
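As a minimal illustration of such a load-flow computation, the sketch below solves the linearised ("DC") power-flow equations, a standard simplification of the full AC Ohm/Kirchhoff system, on a small four-bus network invented for this example; KEMA's actual model grid and tools are not assumed.

```python
import numpy as np

# Invented network: bus 0 = slack (feeder), buses 1..3 = neighbourhood nodes.
# Each line is (from, to, susceptance); injections P are net power at each bus
# (positive = generation, e.g. rooftop solar; negative = consumption).
lines = [(0, 1, 10.0), (1, 2, 8.0), (1, 3, 5.0), (2, 3, 4.0)]
n = 4
P = np.array([0.0, -0.3, 0.2, -0.1])  # MW; slack bus balances the rest

# Assemble the nodal susceptance matrix B (Kirchhoff's current law, linearised).
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Fix the slack angle to 0 and solve the reduced system B' theta = P.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from angle differences (Ohm's law, DC approximation).
flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
for (i, j), f in flows.items():
    print(f"line {i}-{j}: {f:+.3f} MW")
```

Because the DC model is linear, the effect of adding a unit at a bus is a precomputable sensitivity vector, which is one route to the "quick" assessment the problem asks for.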

Given an existing power grid, we would like to have a method that can quickly determine how many units of each type (solar panel, small wind turbine or heat pump) can be inserted into any transmission line in the network, such that under given distributions on the typical production and consumption, the maximum loads on the lines and components will not be exceeded, or if exceeded, to what degree and for what length of time this is likely to happen.

As input, we will provide the operating characteristics and statistics of the three types of components, the load-flow parameters of a model power grid of a neighborhood of a fictitious town, and typical usage data.

Depending on the progress during the week, the problem can be extended by incorporating more parameters into the analysis, adding optimization criteria, or determining necessary network reinforcement.
!Dry blending of powders (with Pfizer)

Dry blending of powders is widely used in the manufacture of food formulations. In the infant formula industry, typically a base powder is manufactured by traditional wet processing and spray drying techniques, into which selected powdered ingredients (e.g., carbohydrates, vitamins, minerals, flavours, heat-labile bioactive components) are dry blended.

There is a wide array of dry blending technology available in the marketplace (e.g., ribbon blenders, paddle blenders, rotating IBC-type blenders). Depending on application and scale, this equipment is available in batch or continuous mode.

The base powders and the powdered components blended into them vary greatly in particle size, with particle diameters ranging from 10 to 300 microns. The levels of incorporation of these components into base powder also vary greatly depending on ingredient, product, label claims etc., and can range from 0.1 to 20% of the final weight of product. Certain quality parameters of the finished product powder (e.g., bulk volume, particle size distribution and free fat) are also affected by blender type and blend time.

In a typical operation the objective is to achieve a homogeneous blend in the minimum possible blend time without negatively affecting finished product quality. The objective of this theoretical exercise is to understand whether predictions can be made about the choice of blending technology and blend time needed to achieve homogeneity, without negatively impacting product quality, based on key base powder and blending ingredient parameters (e.g., particle size and level of incorporation).
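A back-of-envelope indication of why particle size and incorporation level matter for homogeneity: even a perfectly random mixture has a sampling-statistics floor on uniformity. The sketch below applies the binomial bound with invented numbers (density, sample size, incorporation level are all assumptions).

```python
import math

# Ideal random-mixture bound: a spot sample containing n particles, a fraction
# p of which are (on average) the minor ingredient, has an assay coefficient
# of variation CoV = sqrt((1 - p) / (n * p)) by binomial statistics.
rho = 1500.0          # assumed particle density (kg/m^3)
sample_mass = 1e-3    # 1 g spot sample (assumed)
p = 0.001             # 0.1% incorporation, the hardest case quoted above

covs = {}
for d_um in (300, 10):                     # coarse vs fine ingredient particles
    d = d_um * 1e-6
    m_particle = rho * math.pi / 6 * d**3  # mass of one spherical particle
    n = sample_mass / m_particle           # particles per spot sample
    covs[d_um] = math.sqrt((1 - p) / (n * p))
    print(f"d = {d_um:3d} um: ~{n:.1e} particles/sample, CoV ~ {covs[d_um]:.2%}")
```

Under these assumptions a 300-micron ingredient at 0.1% incorporation cannot beat a CoV of roughly 15% per gram sampled, no matter how long it is blended, whereas a 10-micron ingredient can in principle reach well under 1%.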
Whilst ADSL supports error correction, these mechanisms have an impact on the performance of the line. Under Dynamic Line Management, data are collected from a customer's line and analysed, and changes are made to move the customer to a more appropriate profile. The approach is typically iterative, so that a number of profile changes are made before the customer's line is fully optimised.
The problem is how to most efficiently ensure that a ship, floating on the surface of the ocean, stays very nearly at rest with respect to the sea bed. One may employ thrusters, propellers and rudder action. More precisely, given external forces from wind, current and waves, and a specified position and motion envelope for the ship bulk, study the number, type, and placement of dynamic motion controls on the bulk with the goal of forming an optimal Dynamic Positioning System.
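A minimal 1-D sketch of the control idea: a proportional-derivative (PD) thruster law holding a ship against a steady environmental force. The ship model, drag, gains and forcing below are all invented for illustration, not part of the problem data.

```python
# 1-D station-keeping: ship = mass with linear drag, PD thruster control,
# steady wind/current force.  All numerical values are assumptions.
m, c = 5e6, 1e5        # ship mass (kg), linear drag coefficient (N s/m)
kp, kd = 2e5, 2e6      # PD gains (assumed)
F_env = 5e4            # steady environmental force (N)

x = v = 0.0            # position offset (m) and velocity (m/s)
dt = 0.5
for _ in range(20_000):            # 10,000 s of simulated time
    F_thrust = -kp * x - kd * v    # PD control law (no integral action)
    a = (F_thrust + F_env - c * v) / m
    v += a * dt                    # semi-implicit Euler step
    x += v * dt
print(f"steady offset ~ {x:.3f} m")  # settles near F_env/kp = 0.25 m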
!Maths in Industry Newsletter (May 1995)
(Dr Tim Myers, OCIAM, Maths Institute, Oxford)

After a hectic month, including the BAMC and the 28th European Study Group with Industry, I'm back with another newsletter. As mentioned in the previous issue, the aim of this newsletter is to concentrate information useful to industrial mathematicians, so please let me know if you have anything useful to contribute.
!European Study Group with Industry
The 28th ESGI was held at the Newton Institute in Cambridge. Academics were challenged by a variety of problems ranging from wear in mud pumps to issuing new company shares. There were also presentations on progress made at OCIAM on problems in the glass industry for Pilkingtons Research and a lively "discussion" on exponentially small asymptotics.
The various problems tackled during the week are described below.

!!British Gas
This problem concerned the flow of a gas jet that was directed downwards, through a crossflow, towards a planar surface; it is an idealised model for a gas leak from a ruptured pipe. One of the most noticeable features of the flow, from the experimental results provided by British Gas, is the presence of "horseshoe" vortices which curve around the front of the descending jet and travel downstream, entraining air. A flow dominated by these vortices, which originate from a point source where the jet strikes the wall, was proposed, and from the conditions that the position of the vortices remains fixed and that the core radius of the line vortex remains small, a nonlinear differential equation was derived for the position of the horseshoe vortex system. The extent of the gas jet upstream was estimated using a simple conservation argument, with the assumption that the mass of air entrained was proportional to the difference between the velocity of the jet and the external flow. This argument provided estimates of the extent of the flow which were in reasonably good agreement with experiment. An independent two-dimensional analysis of the flow, using dimensional arguments, suggested a parabolic shape for the gas jet as it spreads out along the wall.

!!Domino Printing
The objective was to understand the mechanism by which a solid plug forms in the end of an ink jet nozzle and so help in formulating a strategy to prevent this. It had been noted that while such a plug forms rapidly in the confines of a small nozzle the surface of the same ink in a beaker only produced a crust near the container wall.

The principal cause of solidification was thought to be the reduction in solvent concentration due to evaporation, resulting in higher polymer concentration. Other related effects considered were (i) phase separation - similar three-phase (e.g. solvent, polymer, pigment) mixtures are known to exhibit this phenomenon - together with nucleation of the solid phase at the ink/container boundary; (ii) higher concentrations of polymer will occur near a contact line if the contact angle is acute because of the difficulty of replenishing such regions with solvent; (iii) effective lack of mechanical strength of any crust on a large surface can allow it to break up; (iv) evaporation causes cooling, thermal contraction, and, for a large expanse of ink (with air above), e.g. in a beaker, natural convection and resultant mixing - thereby reducing the localised polymer concentration.

It was apparent that polymer build-up near the free surface of the ink would be reduced by mixing, for example using forced convection.

A second problem brought by Domino involved designing a valve by moulding an aperture in a sheet of rubber and closing it by compression. To avoid wear due to excessive stresses, sharp edges were to be avoided in the aperture. Also, the valve should not buckle, and should close in from the sides in order to prevent breakup of an ink droplet flowing through the valve.

Contact problem theory indicates that a thin line crack with the initial shape of an ellipse will close instantaneously along its length on the application of a large enough pressure. It would be preferable to zip the aperture in from the sides, hence an initial shape of superimposed ellipses was considered. A change in pressure causes the crack to close from the sides inwards in a continuous fashion as each ellipse snaps shut. This shape however would have cusps at the edges and hence would be difficult to mould. However by superimposing an outer ellipse on the obtained shape, a shape which is possible to mould is formed. On the application of a pressure this outer ellipse can be shut to obtain the cusped edge shape and then the valve can be operated between that pressure and the pressure needed for closure.

!!Elkem
This problem concerned the process of calcining anthracite. In particular, Elkem wished to understand the stability of the calcining process and the temperature variation caused by uneven flow at the bottom of the calciner.

A number of different aspects of this problem were considered. Analysis of the thermal boundary layer near the top electrode shed light on the heated wake and showed where the electrode preheats the anthracite encouraging the calcining process. A similarity solution modelled the reaction plume and a simple ODE system was developed allowing the stability to be considered.

!!MAFF
The problem brought by MAFF was to devise a method to predict the freezing times for foods in an industrial freezer to within 10% accuracy whilst still being simple enough to use on a PC.

Many foods such as pizzas or hamburgers have one length scale sufficiently less than the others to allow them to be treated as one-dimensional. The Study Group derived conditions to be satisfied by the food and the freezer for this to be justified. The transfer of heat from the surface of the food was modelled by Newton cooling. Numerical experiments showed that the constant of proportionality for the cooling law is a function of the conditions in the freezer and does not vary significantly for different foods.

The 1-dimensional model can simulate the layered structure of foods by allowing the physical parameters (thermal conductivity, specific heat and density) to vary with the spatial coordinate. It is believed that modelling the distinct layers will be significantly more accurate than the current practice of simply averaging the physical parameters and simulating a homogeneous food.
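The 1-D scheme described above can be sketched as an explicit finite-difference model with Newton cooling at the surface. The version below ignores latent heat (so it illustrates the numerical setup rather than a full freezing-time model), and all property values are assumptions, not MAFF data.

```python
import numpy as np

# Explicit 1-D conduction in a slab, symmetric about the centre, with
# Newton cooling (Robin condition) at the surface.  Latent heat is omitted;
# all physical values below are invented for illustration.
L = 0.01                          # half-thickness of the food slab (m)
k, rho, cp = 0.5, 1050.0, 3300.0  # conductivity, density, specific heat
h = 25.0                          # surface heat-transfer coefficient (W/m^2 K)
T_air, T0 = -30.0, 20.0           # freezer air and initial food temperature (C)

nx = 21
dx = L / (nx - 1)
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha          # inside the explicit stability limit
T = np.full(nx, T0)               # node 0 = centre, node nx-1 = surface

t = 0.0
while T[0] > 0.0 and t < 20_000:  # run until the centre reaches 0 C
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = T[0] + 2 * alpha * dt / dx**2 * (T[1] - T[0])          # symmetry
    Tn[-1] = T[-1] + alpha * dt / dx**2 * (
        2 * (T[-2] - T[-1]) + 2 * dx * h / k * (T_air - T[-1]))    # Newton cooling
    T = Tn
    t += dt
print(f"centre reaches 0 C after ~{t / 60:.0f} min")
```

Layered foods are handled in the same framework by letting `k`, `rho` and `cp` be arrays varying with the node index; latent heat is commonly folded in via an apparent specific heat over the freezing range.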

!!Pilkington
An issue raised during the OCIAM presentation on problems in glass flow concerned bubbles bursting at a free surface. Pilkingtons find that most bubbles produced in molten glass will float to the surface and quickly burst through, yet a small number of bubbles can sit underneath the surface for up to an hour.

Gravity acts to drain the film separating the bubble from the air. The mechanism proposed for stabilising this film is surface tension variation which acts to pull fluid in the opposite direction to gravity. Estimates for the time scales showed that surface tension effects can increase the draining time from a few minutes to a few hours.
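An order-of-magnitude check on the gravity-driven drainage time is easy to do; the viscous-drainage scaling and every number below are assumptions chosen only to show that the timescale lands in the minutes-to-hours range discussed above.

```python
# Gravity-driven drainage of a thin viscous film over a surface bubble:
# a standard lubrication scaling gives t ~ mu * R / (rho * g * h^2).
# All values are assumed order-of-magnitude figures, not Pilkington data.
mu = 100.0    # molten glass viscosity (Pa s)
rho = 2500.0  # glass density (kg/m^3)
g = 9.81
h = 1e-4      # film thickness (m)
R = 5e-3      # bubble radius ~ drainage length scale (m)

t_drain = mu * R / (rho * g * h**2)
print(f"drainage timescale ~ {t_drain / 60:.0f} minutes")
```

Since the result scales as `1/h^2`, a modest thickening of the film by Marangoni (surface-tension-gradient) flow stretches minutes into hours, consistent with the estimate quoted above.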

!!Riskcare
Riskcare wished to price American-style warrants. These are call options which can be exercised at any time to create new shares. Of particular interest was the situation where the number of shares and warrants issued is a significant proportion of the original number of shares. This has been observed to lead to "dilution effects", because the company value does not increase whilst the number of shares does.

First the pricing of American warrants in a classical setting was considered. This showed that given optimal exercise of the warrants no dilution of the share price should occur. The dilution effect observed in practice must therefore be caused by other factors. One possibility is that the price drop is due to large sales of the shares obtained on exercise of the warrants. A pricing equation was also developed for the warrants which would require numerical solution.

!!Schlumberger Cambridge Research
Schlumberger Cambridge Research presented the problem of redesigning the valve in a positive displacement pump in order to reduce the wear on its urethane seal. The erosion, or "pitting", of the urethane is due to the presence of small proppant particles which constitute about 35% by volume of the slurry that is being pumped. Consideration of the dimensionless parameters in the problem led to the conclusion that it is not possible to redesign the valve in such a way that particles are expelled preferentially over the fluid; they are certain to be crapped and trussed as the valve closes (A.D. Fitt, (not very) private communication). However, the problem of pitting could be reduced by redesigning the shape of the seal in order to minimise the stresses imposed on the urethane by trapped particles; several alternative designs were suggested.

!!UBS
UBS were interested in calculating risk/reward profiles for a portfolio problem. Two aspects were discussed: firstly, the location of the optimum reward portfolio (or portfolios) for a given level of risk; secondly, some issues to do with three-dimensional graphical representations of the risk-reward diagram as sample portfolio parameters varied.

It was a thoroughly enjoyable and well organised week. Thanks to all those people in Cambridge who took the trouble to arrange the meeting and make it a success. The next study group will be held in Oxford and supported by the Smith Institute.

The 29th ESGI was held this year at the Mathematical Institute of Oxford University. The participants were treated to an unparalleled choice of problems, ranging from modelling radioactive sludge to the politically shaky ground of determining how government changes to the housing bill will affect homeless people. At the same time delegates were constantly hounded by the press, who had one thing on their mind: to get the scoop on watching paint dry (Daily Mirror 27/2/96, THES 15/3/96, New Scientist 17/2/96, Radio 4 - Science Now, Radio Oxford, Tomorrow's World and even Central TV, who turned out to be filming the wrong conference). The Study Group also leapt to the forefront of technology with the introduction of teleconferencing, provided by BT.

During the week a series of talks was given by OCIAM members. On Tuesday afternoon Kevin Parrott gave a lecture entitled "Particles, packages and parallelism". John Ockendon gave a lecture on Wednesday, "Mathematics-in-industry into the millennium", and on Thursday Robert Leese talked about "Applications of discrete methods". On Monday evening the Tryfan Rogers Memorial Reception, held in the Reading Room of Somerville College, was very well attended.

The various problems tackled during the week are described below.

In crude terms, the problem is to determine the efficiency of a mobile phone antenna when it is placed in 1 sq km of typical urban environment. More precisely, information was sought about the number N of rays that need to be considered at any particular point if the energy in the uncounted rays is to be at most 10% of the total. Given a reflection coefficient of 1/2 on each ray bounce, and neglecting diffraction altogether, a study of some prototypical configurations suggested that N^(3N/2) needed to be less than some prescribed constant. It was also pointed out that in any street, say in a housing estate, there is probably only one dominant ray, and that the identification of this ray could give much valuable information. Several other representations of the solution of the Helmholtz equation were also proposed, ranging from "hybrid" schemes (which account for low frequencies in terms of discrete nodes and high frequencies in terms of rays) to "diffusion" approximations akin to those used in radiative transfer in optically thick media. Thus the problem suggested several new areas of applied mathematics as well as opening everyone's eyes to the wonders of video conferencing.

!Courtaulds Coatings
The problems brought to the Study Group by Courtaulds Coatings concerned the electrostatic deposition of powder paints onto an earthed metal workpiece. The main questions considered were:
# What factors affect deposition efficiency?
# How is deposition efficiency maximised?

In general two types of "gun" are used in the painting process: the corona gun, which charges particles by ionising the air in the vicinity of the gun nozzle, and the tribo gun, which charges particles directly using friction inside the gun. As the corona gun involves many more complicated physical processes, only the tribo gun, with a single species of particle, was investigated at the meeting.

Non-dimensionalisation of the governing equations showed that electric and aerodynamic forces were in balance. The particular case of a narrow "jet" of particles impinging on an earthed workpiece was then analysed, showing that geometry is by far the most important factor affecting efficiency.

!Pacific Northwest National Laboratories
Hydrogen gas is being produced by radioactive decay in large (10^3 m^3) nuclear waste storage tanks at the rate of 4 m^3/day. The waste/sludge has a yield stress of 10^3 Pa. The study group was asked to consider the mode in which the gas is released, whether as dangerous large eruptions or as acceptable small bubbles: there is danger if the concentration of gas in the roof of the tank exceeds 4%.

The study group decided that bubbles larger than 10cm would overcome the yield stress and move. To examine how the bubbles might grow, an inflating crack model was constructed. Ideas of percolation through a network of cracks were abandoned when the originator of Percolation Theory told us that nothing was known (by which he meant proven), and that it would take 6 months to produce a computer estimate (probably more accurately than justified by the model). Looking at the problem as a very slow flow through a porous medium, one rapidly concludes that the pores are nearly all sealed.
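The 10 cm figure can be checked by a one-line balance: a bubble of radius R exerts a buoyancy stress of order `rho*g*R` on the surrounding sludge and can move only once this exceeds the yield stress. The sludge density below is an assumption (the problem statement gives only the yield stress).

```python
# Order-of-magnitude mobilisation criterion for a bubble in a yield-stress sludge:
# buoyancy stress ~ rho * g * R must exceed the yield stress tau_y.
tau_y = 1e3    # yield stress (Pa), from the problem statement
rho = 1.5e3    # assumed sludge density (kg/m^3) - not given above
g = 9.81

R_crit = tau_y / (rho * g)
print(f"critical bubble radius ~ {R_crit * 100:.0f} cm")  # of order 10 cm
```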

The main conclusions of the study group were that the gas should be released steadily, unless there is a seal or crust which traps the mobile 10 cm bubbles. Less happily, quite a large mass of gas is expected to be stored at the large hydrostatic (sludge-static?) pressure at the bottom of the tanks, so that it would be most unwise to stir the tanks.

!Nuclear Electric
The frequency of the national electricity grid is affected by fluctuations in supply and demand, and so continually "judders", in an essentially unpredictable fashion, around 50 Hz. At present such perturbations do not affect Nuclear Electric as their plant is run at more or less constant load, but they would like to be able to offer the national grid a mode of operation in which they "followed" the grid frequency: i.e., as the frequency rose above or fell below 50 Hz, the plant's output would be adjusted so as to tend to restore the frequency to 50 Hz. Such a mode of operation, however, would cause a certain amount of damage to plant components (e.g. the reactor fuel cladding) owing to the consequent continual changes in temperature and pressure. The problem was to devise a method which, given grid frequency data, could estimate this damage in reasonable computational time.

Two main approaches were considered: statistical prediction and analytical modelling via a low-order differential system. With the former approach it was difficult to reach concrete conclusions in the small amount of time available, but using "typical" data supplied by Nuclear Electric it was possible to verify that a phase space predictor would in theory provide a feasible nonlinear statistical model. The analytical approach reached some very promising conclusions: a linear third-order system (apparently valid for all but the most rapid of grid changes) was obtained, and comparisons between the theoretical gain predicted by this model and the gain calculated from real data showed a remarkable degree of agreement. A good model of the behaviour of the grid was also derived based on Brownian motion.

The aim was to see how priorities employed by local authorities in allocating housing affected the numbers of households on registers for council housing. The study was motivated by prospective Government legislation which could mean councils giving less priority to homeless families in temporary accommodation. A system of differential equations was used to model the flow rates between different classes of population (e.g. the homeless, families content with private-sector accommodation, families in council property but on the register for more suitable homes). The rates between any categories were taken to be proportional to the population from which the flow originated, except that rates of rehousing were assumed to be jointly proportional to the number of spare council houses. Using the figures supplied (for a "typical borough") it was found that most category sizes were insensitive to the constant of proportionality controlling the rehousing of the homeless. The exception was the number of people in temporary accommodation, which increased substantially as this constant was reduced towards those for the other rehousing rates (as the system was made "fairer").
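The flow model described above can be sketched as a small compartment system. The version below uses three stocks and invented rates (none of the "typical borough" figures are assumed); its point is only to show the structure: flows proportional to the source stock, with rehousing jointly proportional to spare council stock.

```python
# Toy compartment model of housing flows (all rates and stocks invented):
# H = homeless households, T = households in temporary accommodation,
# C = council-housed households; rehousing depends on spare stock S.
C_max = 1000.0                   # total council housing stock
aHT, aTC, aCH = 0.2, 0.05, 0.01  # per-month flow rates (assumed)
H, T, C = 200.0, 300.0, 800.0    # initial stocks (assumed)

dt = 0.1
for _ in range(5000):              # 500 months, forward Euler
    S = max(C_max - C, 0.0)
    f_HT = aHT * H                 # homeless -> temporary accommodation
    f_TC = aTC * T * (S / C_max)   # rehousing, jointly proportional to spare stock
    f_CH = aCH * C                 # households leaving council housing
    H += (f_CH - f_HT) * dt
    T += (f_HT - f_TC) * dt
    C += (f_TC - f_CH) * dt
print(f"equilibrium: H ~ {H:.0f}, T ~ {T:.0f}, C ~ {C:.0f}")
```

The qualitative behaviour reported above appears even in this toy version: shrinking the rehousing constant for one category mostly just inflates that category's queue, leaving the other stocks nearly unchanged.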

!Du Pont
DuPont wanted to understand the mechanisms for the formation and evolution of defects in wet screen printed layers. Their primary objective was to know how best to alter the properties of the paste (rather than the geometry of the screen printing process itself) in order to eliminate the defects. With these goals in mind the work done during the Study Group was as follows:
* a simple model for the closure of craters,
* a model for the partial closure of large, intentionally-created gaps in the paste layer,
* a possible mechanism for the formation of pinholes, which crucially involved the "locking together" of the relatively large solid particles present in the paste,
* a more detailed consideration of the screen printing process, particularly attempting to understand the flow of the paste through the mesh.

!Greycon
Greycon are interested in pattern reduction in the one-dimensional stock cutting problem. They are concerned with cutting "jumbo" reels of paper into narrower "customer" reels, the quantities and widths of which are specified by the clients. They are satisfied with their current waste-minimisation program, but given a minimum-waste solution they want to know if they can satisfy customer demand with the same minimum waste but with fewer settings of the paper slitting knives, or better still, the minimum number of settings of the knives.

We showed that there is no easy way to find the minimum number of knife settings for a given minimum-waste solution, as the problem is NP-hard. However, we constructed an algorithm capable of detecting some reductions in the number of knife settings that the existing Greycon algorithm cannot detect. The algorithm was coded up and tested, and improved on Greycon's current algorithm by reducing the number of knife settings in some of the test problems that Greycon provided.

!Perkins Technology
The first problem brought by Perkins was to calculate the temperature of diesel fuel as it is injected into the cylinder of a diesel engine. Fuel at a temperature of about 300 K arrives via a high-pressure line into a storage area in the injector; it has a residence time there of about 40 ms, before being injected into the cylinder in 1.5 ms. The quantity of fuel in one injection is about 85 cubic mm. The temperature inside the cylinder varies during the combustion cycle, but the temperature of the fuel is very hard to measure. Thermocouple readings in the wall of the injector give a temperature of about 550 K.

The heat transfer through the injector into the fuel was investigated. The major effects were conduction into the annular fuel storage region during the quiescent period and convection in the nozzle region during the injection period. Adiabatic cooling might be expected to cool the fuel slightly. An estimate of 10 K for the rise in temperature of the fuel was arrived at. More accurate temperature measurements in the body of the injector would help to make the figure more precise.

The second problem brought by Perkins Technology was to determine the feasibility of using a cyclone separator to remove small soot particles from exhaust emissions. By calculating the drift velocity of the particles relative to the air caused by the centrifugal force, we concluded that particles must have a minimum diameter of around one micron to be spun out by this kind of device. This is 1 to 2 orders of magnitude larger than the particles found in the exhaust.
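The minimum-diameter conclusion follows from balancing centrifugal force against Stokes drag; the sketch below does this with invented operating values (gas viscosity, tangential speed and cyclone radius are assumptions, not Perkins data).

```python
# Radial drift speed of a soot particle in a cyclone, from the balance
# (pi/6) d^3 rho_p v^2 / r  =  3 pi mu d u   =>   u = d^2 rho_p v^2 / (18 mu r).
# All operating values are assumed for illustration.
mu = 3e-5       # viscosity of hot exhaust gas (Pa s)
rho_p = 2000.0  # soot particle density (kg/m^3)
v = 20.0        # tangential gas speed in the cyclone (m/s)
r = 0.05        # cyclone radius (m)

def drift_speed(d):
    """Stokes-regime radial drift speed for a particle of diameter d (m)."""
    return d**2 * rho_p * v**2 / (18 * mu * r)

u1 = drift_speed(1e-6)   # 1 micron particle
u01 = drift_speed(1e-7)  # 0.1 micron particle
print(u1, u01)           # drift scales as d^2: 100x slower for the smaller particle
```

With these numbers a 1-micron particle drifts at centimetres per second, marginal over a typical residence time, while a 0.1-micron particle is a hundred times slower, which is why the sub-micron exhaust soot cannot be spun out.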

We also investigated the mechanism for particle growth in the exhaust system. The dominant mechanism was found to be particle aggregation from Brownian motion. This predicts that the particles should grow to around 0.1 micron in diameter at the end of the exhaust system, in good agreement with experimental measurements of particle size.
Full reports will be sent out to all "model" participants; further copies are available on request. Finally, the organisers would like to thank all the participants and industrialists who came and made the week such an unqualified success. We are also grateful to the EPSRC and the LMS for their financial support, and to The Smith Institute for their assistance in organising the meeting.
# [[Optimal blower design]] AKZO
# [[Cracking and buckling in ship bulkheads]] WS Atkins
# [[Risk estimates (obtained from highly skewed distributions)]] AEA Technology
# [[Electrode paste briquette softening]] ELKEM
# [[Homogenisation of thermal and electrical properties of the Søderberg electrode]] ELKEM
# [[Determining thermal properties of food]] Food Sciences
# [[Micro-waving food]] Food Sciences
# [[Mudcracking in drying paint]] ICI
# [[Thermo-electrical stability in an electrode]] Elkem
# [[Diffusion of titanium into sapphire in the fabrication of miniature lasers]] Opto-electronics Research Centre
# [[Visco-elastic behaviour of glass]] Pilkington
# [[Flow of swelling clays in narrow cracks]] Quantisci
# [[Linear friction welding]] Rolls Royce
# [[Efficient polynomial approximation of television images]] Snell and Wilcox
# [[Simulation of free surface flow and heat exchange in a partly filled reactor]] Unilever
# [[Feature recognition in 3D-scanning]] SCAN technology
# [[Dynamic Positioning System]] Danish Maritime Institute
# [[Scroll optimization]] Danfoss
# [[How to build with LEGO]] LEGO
# [[Mixing of chlorine in swimming-pools]] Grundfos
# [[Temperature and moisture gradients in sugar silos]] DANISCO
# AKZO Nobel: Instability in fibre drawing
# Eldim: Laser drilling
# KPN Research: Default logic in telephone services
# KPN Research: ADSL modems
# Nederlands Meetinstituut: The t-factor for a non Gaussian distribution
# Dr Daniel den Hoed Kliniek: Pattern recognition in CT scans
# TRESPA: Pressing of a corner profile 
# [[Deformation of the Surface of Fish]] Food Science and Technology Research Centre
# [[Mixing in the Downward Displacement in a Turbulent Wash]] Schlumberger Dowell, Clamart
# [[Optimum Deployment of Telephone Engineers]] BT
# [[Sunroof Boom]] Jaguar Research
# [[Modeling thermostatic radiator valves]] Danfoss
# [[Mathematical modeling of paint flow from a spray gun]] Odense Steel Shipyard
# [[Determining parameters for a robot]] Amrose A/S

# [[Modelling the flow and temperature distribution in fan-chilled rooms]] FOOD REFRIGERATION AND PROCESS ENGINEERING RESEARCH GROUP
# [[Shock-free supersonic flight]] Scott Rethorst, VEHICLE RESEARCH CORPORATION
# [[Analysis of a vibrating needle curemeter]] RAPRA TECHNOLOGY LTD (formerly RUBBER AND PLASTICS RESEARCH ASSOCIATION)
# [[Aircraft departure sequencing]] NATIONAL AIR TRAFFIC SERVICES LTD
# [[Noise from water leaks in pipes]] MECON, CAMBRIDGE

# [[Cooling overheated fish]] Artis Aquarium
# [[Better compression of audio-signals]] Philips
# [[Component placement on chips|Component placement op chips]] Magma Design Automation
# [[Reconstruction of sea-surface temperatures using fossil marine plankton]] NIOZ
# [[Parameters to grow roses]] Phytocare
# Diffusion of euro coins over Europe
# [[Problem from Acordis Acrylic Fibres, Grimsby]]
# [[Speech recognition]] Jomega, Austrey
# [[Problem from Nan Gall, Aberdeen]]
# [[Problem from Numbercraft, Oxford]]
# [[Predicting the Impact Point of a Falling Body, Subject to Drag, in Real-Time Simulation]] QinetiQ, Bedford
# Problem from QinetiQ, Winfrith
# [[Flowable Concentrated Suspensions]] Unilever, Colworth
# [[Small fast inkdrop emission from a nozzle]] Xaar, Cambridge
# [[Scanning with electrostatic fields]] Amfitech
# [[Electromagnetic Energy Flow in Photonic Crystals]] NKT Research
# [[Mass flow measurements by momentum changes]] Danfoss
# [[Performance Forecast of a Flight Schedule]] KLM
# [[Problems surrounding the expanding Pacific oyster in the Eastern Scheldt]] Rivo
# [[Probability model for marks and prints]] Nederlands Forensisch Instituut
# [[How to hang bells and wire construction of a carillon in a tower]] Het Nationaal Beiaardmuseum
# [[The behaviour of a droplet of polymer solution in an ink-jet printer]] Philips
# [[Optimal network design for the ULTra transport system]] Advanced Transport Systems Ltd
# [[Incubation of penguin eggs]] Bristol Zoological Gardens
# [[GPRS session time distribution]] Motorola
# [[Perspiration modelling of the human foot]] SATRA technology centre
# Spinox
# [[Challenges for mathematical modelling in technological plasmas]] Trikon technologies
# [[Escape of air from food foams during pressure release]] Unilever
# [[Teetered tail rotor dynamics]] Westland Helicopters
# Stall Prediction Model [[PDF|p/esgi/47/grundfos1.pdf]]
# Model to Check Distance to Catalog Curve [[PDF|p/esgi/47/grundfos2.pdf]]
# [[Mathematical Analysis of the Dynamic Flow Characteristic in a Damping Nozzle for a Pressure Transmitter]]
# [[Trigger Algorithm for Ultrasonic Flow Metering]]
# Determination of Distance from a 2D Picture [[PDF|p/esgi/47/unisensor.pdf]]
# [[Leakage Detection Method]] Filtrix and X-Flow
# [[Is there a financial life after an error? (caused by accounting)]] Algemene Rekenkamer -- Dutch Governmental Financial Control Body
# [[The rotor spinning process for fibre production]] Teijin Twaron
# [[Statistical disclosure control (PDF)|p/esgi/48/CBS_Eng.pdf]] CBS -- Organisation Statistics Netherlands
# [[Environmental effects of the traffic]] Demis BV
# [[ADR-Option-Trading at AOT]] AOT trading company
# [[Stability of the Old Church in Delft]]

!!Brainstorm session: Improving public awareness of science by means of participative internet projects
In Holland and Belgium a so-called 'Big Flu Measurement' takes place from November 1, 2003 to April 1, 2004. This participative internet project is intended for primary schools, high schools and the general public, and has been highly successful. It is currently being investigated whether it can be repeated during the next flu season and whether it could be launched in other European countries.

The Big Flu Measurement owes its success in large part to the enthusiastic cooperation of Dutch mathematicians. A foundation was created to initiate more of these internet projects. For 2004 a project to study traffic safety is under consideration. In June 2004 a Venus transit takes place; in cooperation with the European Southern Observatory a public measurement will be performed to derive a value for the Astronomical Unit (the distance from the Earth to the Sun). A third internet project involves the growth of children. Again, mathematicians are welcome to contribute to these projects. They are a rare opportunity to popularise mathematics for all age groups and all levels of understanding!
# [[Uncertainty in flow in porous media]] Schlumberger
# [[The design of robust networks for massive parallel micro-fluidic devices]] Unilever
# [[Models of Consumer Behaviour]] Unilever
# [[Modelling of melt on spinning wheels and the impact of scale up on the various parameters]] Thermal Ceramics UK
# [[Real time traffic monitoring using mobile phone data]] Vodafone Pilotentwicklung GmbH
# [[Optical Measurement of Glucose Content of the Aqueous Humor]] Lein Applied Diagnostics
# [[Distribution-independent safety analysis]] National Air Traffic Services
# [[Data Packet Loss in a Queue with Limited Buffer Space]] Motorola Research
# [[Tukkikuorman stabiliteetti]] Timberjack Oy (description in Finnish: "Stability of a log load")
# [[Paperin kuituverkon muodostuminen]] Oy Keskuslaboratorio - Centrallaboratorium Ab (description in Finnish: "Formation of the fibre network in paper")
# [[Adaptive polling technology]] Hotelzon International Ltd
# [[Log sorting model]] Maailmankylä Oy
# [[Roudan sulamisen ennustemalli]] Roadscanners Ltd (description in Finnish: "A prediction model for the thawing of ground frost")
# [[3D building application for children (PDF)|p/esgi/51/LEGO.pdf]] LEGO
# [[Plotting performance map (PDF)|p/esgi/51/HV-Turbo.pdf]] HV-Turbo A/S
# [[Virtual current density in magnetic flow meter (PDF)|p/esgi/51/Siemens.pdf]] Siemens Flow Instruments
# [[Weighing schemes]] NMi
# [[Planning drinking water with a handicap]] KLM (Aqua Services)
# [[Dealing with selection effects in forensic science]] NFI
# [[Partitioning a Call Graph]] SIG
# [[Spatiotemporal patterns in high-density surface electromyography]] IFKB
# [[Warming up bodies after invasive surgery]] AMC 

# [[Mathematical modeling of temperature sensors containing a porous material (PDF)|p/esgi/54/danfoss.pdf]] Danfoss
# [[The effect of temperature gradients on ultrasonic flow measurement (DOC)|p/esgi/54/siemens.doc]] Siemens Flow Instruments
# [[Recognition of small circular objects in noisy images (DOC)|p/esgi/54/unisensor.doc]] Unisensor
# [[Smart calibration of excavator sensors (PDF)|p/esgi/54/mikrofyn.pdf]] Mikrofyn
# [[The Bearing Capacity of Highways (PDF)|p/esgi/54/greenwood.pdf]] Greenwood Engineering
2009, Aug 17-21: [[Southern Denmark (Denmark)|http://www.esgi.dk/]]

# Danske Bank: [[Dependency modelling in credit risk|Dependency modelling in credit risk]]
# Dong Energy: [[Gas portfolio optimisation under uncertainty|Gas portfolio optimisation under uncertainty]]
# Unisensor: [[Reconstruction of 3D morphology from optical sectioning of biological objects|Reconstruction of 3D morphology from optical sectioning of biological objects]]

# [[e-Anchors - design an optimization algorithm to keep a ship at sea fixed in place, using GPS data and thrusters]] MARIN
# [[DoItYourself Power Generation - how many solar panels can you and your neighbors install before you cause a power outage?]] KEMA
# [[Chicken Flow - model the flow of poultry through a meat processing unit]] Stork Food Systems
# [[Algae Purification - optimize algae growth for the removal of fertilizer from greenhouse run-off]] Phytocare
# [[GPS in the Shopping Mall - compute the positions of a mobile network, given absolute positions of a few]] ESA

# [[Structural Models for Wind Turbines]] Teknova and IRIS
# [[Emitter-Platform Association]] SELEX Galileo Ltd.
# [[Modelling Hurricane Track Memory]] Lloyd's of London
# [[Earthquake Risk: Including Uncertainties in the Ground Motion Calculation]] AIR Worldwide
# [[A Neutrally Stable Virtually Pivoting Chair]] 61-54 Design
# [[Dynamic Line Management]] TalkTalk
# [[Fractal Properties of Soil]] Syngenta

# [[Lotsizing and sheduling in BA Vidro]] BA Vidro
# [[Bluepharma]]
# [[How far can we go in aluminum extrusion?]] Extruverde
# [[Food distribution by a food bank among local social solidarity institutions]] Food Bank of Lisbon
# [[Evaluation of taxi services provision on airport terminals curbside for picking up passengers]] Globalvia
# [[Checkout area design]] Sonae

# [[Spin-coated substrates for cell growth]]
# [[Model based methodology development for energy recovery in flash heat exchange systems]]
# [[Optimising voice quality in conference calls]]
# [[Monomer Flow in Contact Lens Manufacture]]
# [[Coating a complex lattice]]
# [[Dry Blending of Powder - Impact of Particle Size, Blender Type and Blend Time on Homogeneity and Product Quality]]
# [[Optimising chicken production]]
# [[Structured Products]]

2010, Aug 16-20: [[Lyngby (Denmark)|http://www.esgi.dk/76/]]

# [[Peristaltic Pump|Peristaltic Pump]] Danfoss
# [[Gas portfolio optimisation under uncertainty|GAS portfolio optimisation under uncertainty]] DONG energy
# [[Siphoning from ground water wells|Siphoning from ground water wells]] DHI
# [[Oil Well Magnetometers|Oil Well Magnetmeters]] WellTec
# [[Optimization of investment proposals in a Venture Capital investment simulator VCR]] CambridgePYTHON
# [[Mathematical patients privacy protection techniques in medical databases]] Centre of Health Information Systems (CSIOZ)
# [[Models and measures to evaluate the effectiveness of funds utilization for scientific research and advanced technologies development]] The Information Processing Centre (OPI)
# [[Comparable aggregated indicators of QoS in telecoms market]] The Office of Electronic Communications (UKE)
# [[Cryptographic techniques used to provide integrity of digital content under long-term storage]] Polish Security Printing Works (PWPW)

# [[Blood pressure measurement]] Sabirmedical
# [[Prevention of flooding of the river Ebro]] Sistemas Avanzados de Control
# [[Bandwidth consumption and invoicing models]] Cisco Systems
# [[Efficient Silicon Melting]] Elkem / Teknova
# [[Future timetables: Scheduling of a future air transport system]] Airbus
# [[Hardware-constrained matching algorithms]] Thales
# [[Analytical solutions for compartmental models of contaminant transport in enclosed spaces]] DSTL
# [[Modelling the Effect of Friction on Explosives]] AWE
# [[Interpreting Pharmaceutical Screening Test Results]] Pfizer
# [[Effect of Distributed Energy Storage Systems on the Electricity Grid]] Ecotricity
# [[Multiplier effect of the Engineering & Tooling sector in Portugal]] Iberomoldes
# [[Innovation effect on the Engineering & Tooling sector]] Iberomoldes
# [[Aircraft Components Maintenance Shop Production Planning: Random events prioritization]] TAP Maintenance and Engineering
# [[Balanced Scorecard, objectives and its relationships]] Critical Software 
# [[Electricity prices and demand side management]] Crystal Energy
# [[Roll Coating Technology]] DSM
# [[Optimisation in the Beef Industry]] Rangeland Foods
# [[Haptic Mobile Phone Touchscreens]] Analog Devices
# [[Efficient usage of O-negative blood]] Mid-Western Regional Hospital
# [[Foam Formation in the Plastics Industry]] Cork Plastics
# [[Predicting Bus Arrival Times]] Dublin City Council
# [[Automated Crack Detection in Roads using the Surface Imaging System]] Greenwood Engineering
# [[Optimizing utilization of silo capacities]] Solae Denmark project
At present, uncertainties in GMPEs (Ground Motion Prediction Equations) are not included in any commercial model in worldwide use. The aim of this proposal is therefore to find a mathematical way to incorporate the distribution around the mean ground motions without increasing the already demanding computational effort. A suitable solution would represent a milestone in earthquake catastrophe modelling.
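One standard low-cost way of folding GMPE scatter into a hazard calculation is sketched below (it is not the method of any particular commercial model, and the numbers are illustrative): treat the logarithm of the ground motion as normally distributed about the GMPE median with standard deviation sigma, and evaluate exceedance probabilities from the closed-form normal CDF instead of by sampling.

```python
import math

def p_exceed(median, threshold, sigma_ln):
    """P(ground motion > threshold) for lognormal scatter about the GMPE median.

    median and threshold share any unit (e.g. PGA in g); sigma_ln is the
    standard deviation of the natural-log residuals.
    """
    z = (math.log(threshold) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # 1 - Phi(z), no sampling needed

# Illustrative numbers: median PGA 0.2 g, a typical sigma_ln of 0.6.
# The exceedance probability at 0.3 g stays substantial even though the
# threshold lies above the median -- exactly the effect that is lost when
# the scatter around the mean ground motion is ignored.
print(p_exceed(0.2, 0.3, 0.6))
```

Because the extra work per site is one evaluation of `erfc` rather than thousands of samples, this kind of closed-form treatment adds almost nothing to the computational effort.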
body {color:#444; font-size:0.75em;line-height:1.4em;font-family:arial,helvetica;margin : 0.5em; padding : 0;}
html {border:0}
a, a:link, a:visited, a:active {text-decoration:none;color:#BB4400;font-weight:bold}
ul, ol {margin-left:0.5em;padding-left:1.5em;}
|''Description:''|Lite and extensible Wysiwyg editor for TiddlyWiki.|
|''Date:''|Dec 21,2007|
|''Author:''|Pascal Collin|
|''License:''|[[BSD open source license|License]]|
|''Browser:''|Firefox 2.0; InternetExplorer 6.0|
*On the plugin [[homepage|http://visualtw.ouvaton.org/VisualTW.html]], see [[WysiwygDemo]] and use the {{{write}}} button.
#import the plugin,
#save and reload,
#use the <<toolbar easyEdit>> button in the tiddler's toolbar (in default ViewTemplate) or add {{{easyEdit}}} command in your own toolbar.
! Useful Addons
*[[HTMLFormattingPlugin|http://www.tiddlytools.com/#HTMLFormattingPlugin]] to embed wiki syntax in html tiddlers.<<br>>//__Tips__ : When this plugin is installed, you can use anchor syntax to link tiddlers in wysiwyg mode (example : #example). Anchors are converted back and from wiki syntax when editing.//
*[[TaggedTemplateTweak|http://www.TiddlyTools.com/#TaggedTemplateTweak]] to use alternative ViewTemplate/EditTemplate for tiddler's tagged with specific tag values.
|Buttons in the toolbar (empty = all).<<br>>//Example : bold,underline,separator,forecolor//<<br>>The buttons will appear in this order.| <<option txtEasyEditorButtons>>|
|EasyEditor default height | <<option txtEasyEditorHeight>>|
|Stylesheet applied to the edited richtext |[[EasyEditDocStyleSheet]]|
|Template called by the {{{write}}} button |[[EasyEditTemplate]]|
!How to extend EasyEditor
*To add your own buttons, add some code like the following in a systemConfig tagged tiddler (//use the prompt attribute only if there is a parameter//) :
**{{{EditorToolbar.buttons.heading = {label:"H", toolTip : "Set heading level", prompt: "Enter heading level"};}}} 
**{{{EditorToolbar.buttonsList +=",heading";}}}
*To get the list of all possible commands, see the documentation of the [[Gecko built-in rich text editor|http://developer.mozilla.org/en/docs/Midas]] or the [[IE command identifiers|http://msdn2.microsoft.com/en-us/library/ms533049.aspx]].
*To go further in customization, see [[Link button|EasyEditPlugin-LinkButton]] as an example.


var geckoEditor={};
var IEeditor={};

config.options.txtEasyEditorHeight = config.options.txtEasyEditorHeight ? config.options.txtEasyEditorHeight : "500px";
config.options.txtEasyEditorButtons = config.options.txtEasyEditorButtons ? config.options.txtEasyEditorButtons : "";

// TW2.1.x compatibility
config.browser.isGecko = config.browser.isGecko ? config.browser.isGecko : (config.userAgent.indexOf("gecko") != -1); 
config.macros.annotations = config.macros.annotations ? config.macros.annotations : {handler : function() {}}


config.macros.easyEdit = {
	handler : function(place,macroName,params,wikifier,paramString,tiddler) {
		var field = params[0];
		var height = params[1] ? params[1] : config.options.txtEasyEditorHeight;
		var editor = field ? new easyEditor(tiddler,field,place,height) : null;
	},
	gather: function(element){
		var iframes = element.getElementsByTagName("iframe");
		if (iframes.length!=1) return null;
		var text = "<html>"+iframes[0].contentWindow.document.body.innerHTML+"</html>";
		text = config.browser.isGecko ? geckoEditor.postProcessor(text) : (config.browser.isIE ? IEeditor.postProcessor(text) : text);
		return text;
	}
};


function easyEditor(tiddler,field,place,height) {
	this.tiddler = tiddler;
	this.field = field;
	this.browser = config.browser.isGecko ? geckoEditor : (config.browser.isIE ? IEeditor : null);
	this.wrapper = createTiddlyElement(place,"div",null,"easyEditor");
	this.iframe = createTiddlyElement(null,"iframe");
	this.wrapper.appendChild(this.iframe);
	// hand over to the browser-specific setup; onload fires once the frame is ready
	this.browser.setupFrame(this.iframe,height,contextualCallback(this,this.onload));
}

easyEditor.prototype.onload = function(){
	this.editor = this.iframe.contentWindow;
	this.doc = this.editor.document;
	if (!this.browser.isDocReady(this.doc)) return null;
	if (!this.tiddler.isReadOnly() && this.doc.designMode.toLowerCase()!="on") {
		this.doc.designMode = "on";
		if (this.browser.reloadOnDesignMode) return false;	// IE fires readystatechange again after the designMode change
	}
	// apply the editing stylesheet inside the frame
	var internalCSS = store.getTiddlerText("EasyEditDocStyleSheet");
	var styleElement = this.doc.createElement("style");
	styleElement.appendChild(this.doc.createTextNode(internalCSS));
	this.doc.getElementsByTagName("head")[0].appendChild(styleElement);
	this.browser.initContent(this.doc,store.getValue(this.tiddler,this.field));
	var barElement=createTiddlyElement(null,"div",null,"easyEditorToolBar");
	this.wrapper.insertBefore(barElement,this.wrapper.firstChild);
	this.toolbar = new EditorToolbar(this.doc,barElement,this.editor);
	this.browser.plugEvents(this.doc,contextualCallback(this,this.scheduleButtonsRefresh));
	return true;
}


easyEditor.SimplePreProcessoror = function(text) {
	var re = /^<html>(.*)<\/html>$/m;
	var htmlValue = re.exec(text);
	var value = (htmlValue && (htmlValue.length>0)) ? htmlValue[1] : text;
	return value;
}

easyEditor.prototype.scheduleButtonsRefresh=function() { // avoid refreshing the button state on every keystroke while typing fast
	if (this.nextUpdate) window.clearTimeout(this.nextUpdate);
	this.nextUpdate = window.setTimeout(contextualCallback(this.toolbar,EditorToolbar.onUpdateButton),easyEditor.buttonDelay);
}

easyEditor.buttonDelay = 200;


function EditorToolbar(target,parent,window){
	this.target = target;
	this.elements = {};
	var row = createTiddlyElement(createTiddlyElement(createTiddlyElement(parent,"table"),"tbody"),"tr");
	var buttons = (config.options.txtEasyEditorButtons ? config.options.txtEasyEditorButtons : EditorToolbar.buttonsList).split(",");
	for(var cpt = 0; cpt < buttons.length; cpt++){
		var b = buttons[cpt];
		var button = EditorToolbar.buttons[b];
		if (button) {
			if (button.separator) createTiddlyElement(row,"td",null,"separator");
			else {
				var cell=createTiddlyElement(row,"td",null,b+"Button");
				if (button.onCreate) button.onCreate.call(this, cell, b);
				else EditorToolbar.createButton.call(this, cell, b);
			}
		}
	}
}

EditorToolbar.createButton = function(place,name){
	this.elements[name] = createTiddlyButton(place,EditorToolbar.buttons[name].label,EditorToolbar.buttons[name].toolTip,contextualCallback(this,EditorToolbar.onCommand(name)),"button");
}

EditorToolbar.onCommand = function(name){
	var button = EditorToolbar.buttons[name];
	return function(){
		var parameter = false;
		if (button.prompt) {
			parameter = this.target.queryCommandValue(name);
			parameter = prompt(button.prompt,parameter);
		}
		if (parameter != null) {
			this.target.execCommand(name, false, parameter);
		}
		return false;
	};
}

EditorToolbar.getCommandState = function(target,name){
	try {return target.queryCommandState(name)}
	catch(e){return false}
}

EditorToolbar.onRefreshButton = function (name){
	if (EditorToolbar.getCommandState(this.target,name)) addClass(this.elements[name].parentNode,"buttonON");
	else removeClass(this.elements[name].parentNode,"buttonON");
}

EditorToolbar.onUpdateButton = function(){
	for (var b in this.elements) {
		if (EditorToolbar.buttons[b].onRefresh) EditorToolbar.buttons[b].onRefresh.call(this,b);
		else EditorToolbar.onRefreshButton.call(this,b);
	}
}

EditorToolbar.buttons = {
	separator : {separator : true},
	bold : {label:"B", toolTip : "Bold"},
	italic : {label:"I", toolTip : "Italic"},
	underline : {label:"U", toolTip : "Underline"},
	strikethrough : {label:"S", toolTip : "Strikethrough"},
	insertunorderedlist : {label:"\u25CF", toolTip : "Unordered list"},
	insertorderedlist : {label:"1.", toolTip : "Ordered list"},
	justifyleft : {label:"[\u2261", toolTip : "Align left"},
	justifyright : {label:"\u2261]", toolTip : "Align right"},
	justifycenter : {label:"\u2261", toolTip : "Align center"},
	justifyfull : {label:"[\u2261]", toolTip : "Justify"},
	removeformat : {label:"\u00F8", toolTip : "Remove format"},
	fontsize : {label:"\u00B1", toolTip : "Set font size", prompt: "Enter font size"},
	forecolor : {label:"C", toolTip : "Set font color", prompt: "Enter font color"},
	fontname : {label:"F", toolTip : "Set font name", prompt: "Enter font name"},
	heading : {label:"H", toolTip : "Set heading level", prompt: "Enter heading level (example : h1, h2, ...)"},
	indent : {label:"\u2192[", toolTip : "Indent paragraph"},
	outdent : {label:"[\u2190", toolTip : "Outdent paragraph"},
	inserthorizontalrule : {label:"\u2014", toolTip : "Insert an horizontal rule"},
	insertimage : {label:"\u263C", toolTip : "Insert image", prompt: "Enter image url"}
};

EditorToolbar.buttonsList = "bold,italic,underline,strikethrough,separator,increasefontsize,decreasefontsize,fontsize,forecolor,fontname,separator,removeformat,separator,insertparagraph,insertunorderedlist,insertorderedlist,separator,justifyleft,justifyright,justifycenter,justifyfull,indent,outdent,separator,heading,separator,inserthorizontalrule,insertimage";

if (config.browser.isGecko) {
	EditorToolbar.buttons.increasefontsize = {onCreate : EditorToolbar.createButton, label:"A", toolTip : "Increase font size"};
	EditorToolbar.buttons.decreasefontsize = {onCreate : EditorToolbar.createButton, label:"A", toolTip : "Decrease font size"};
	EditorToolbar.buttons.insertparagraph = {label:"P", toolTip : "Format as paragraph"};
}


geckoEditor.setupFrame = function(iframe,height,callback) {
	iframe.setAttribute("style","width: 100%; height:" + height);
	iframe.addEventListener("load",callback,true);
}

geckoEditor.plugEvents = function(doc,onchange){
	doc.addEventListener("keyup", onchange, true);
	doc.addEventListener("keydown", onchange, true);
	doc.addEventListener("click", onchange, true);
}

geckoEditor.postProcessor = function(text){return text};

geckoEditor.preProcessor = function(text){return easyEditor.SimplePreProcessoror(text)}

geckoEditor.isDocReady = function() {return true;}


geckoEditor.initContent = function(doc,content){
	if (content) doc.execCommand("insertHTML",false,geckoEditor.preProcessor(content));
}

IEeditor.setupFrame = function(iframe,height,callback) {
	iframe.width="99%";  // IE displays the iframe at the bottom if the width is 100% -- apparently a CSS layout issue
	iframe.height=height;
	iframe.attachEvent("onreadystatechange",callback);
}

IEeditor.plugEvents = function(doc,onchange){
	doc.attachEvent("onkeyup", onchange);
	doc.attachEvent("onkeydown", onchange);
	doc.attachEvent("onclick", onchange);
}

IEeditor.isDocReady = function(doc){
	if (doc.readyState!="complete") return false;
	if (!doc.body) return false;
	return (doc && doc.getElementsByTagName && doc.getElementsByTagName("head") && doc.getElementsByTagName("head").length>0);
}

IEeditor.postProcessor = function(text){return text};

IEeditor.preProcessor = function(text){return easyEditor.SimplePreProcessoror(text)}


IEeditor.initContent = function(doc,content){
	if (content) doc.body.innerHTML=IEeditor.preProcessor(content);
}

function contextualCallback(obj,func){
	return function(){return func.call(obj)};
}

Story.prototype.previousGatherSaveEasyEdit = Story.prototype.previousGatherSaveEasyEdit ? Story.prototype.previousGatherSaveEasyEdit : Story.prototype.gatherSaveFields; // to avoid looping if this line is called several times
Story.prototype.gatherSaveFields = function(e,fields){
	if(e && e.getAttribute) {
		var f = e.getAttribute("easyEdit");
		if(f != null) {
			var newVal = config.macros.easyEdit.gather(e);
			if (newVal) fields[f] = newVal;
		}
	}
	this.previousGatherSaveEasyEdit(e, fields);
}

config.commands.easyEdit = {
	text: "write",
	tooltip: "Edit this tiddler in wysiwyg mode",
	readOnlyText: "view",
	readOnlyTooltip: "View the source of this tiddler",
	handler : function(event,src,title) {
		var tiddlerElem = document.getElementById(story.idPrefix + title);
		var fields = tiddlerElem.getAttribute("tiddlyFields");
		story.displayTiddler(null,title,"EasyEditTemplate",false,null,fields);
		return false;
	}
};

config.shadowTiddlers.ViewTemplate = config.shadowTiddlers.ViewTemplate.replace(/\+editTiddler/,"+editTiddler easyEdit");

config.shadowTiddlers.EasyEditTemplate = config.shadowTiddlers.EditTemplate.replace(/macro='edit text'/,"macro='easyEdit text'");

config.shadowTiddlers.EasyEditToolBarStyleSheet = "/*{{{*/\n";
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar {font-size:0.8em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".editor iframe {border:1px solid #DDD}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar td{border:1px solid #888; padding:2px 1px 2px 1px; vertical-align:middle}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar td.separator{border:0}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .button{border:0;color:#444}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .buttonON{background-color:#EEE}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar {margin:0.25em 0}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .boldButton {font-weight:bold}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .italicButton .button {font-style:italic;padding-right:0.65em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .underlineButton .button {text-decoration:underline}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .strikeButton .button {text-decoration:line-through}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .unorderedListButton {margin-left:0.7em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .justifyleftButton .button {padding-left:0.1em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .justifyrightButton .button {padding-right:0.1em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .justifyfullButton .button, .easyEditorToolBar .indentButton .button, .easyEditorToolBar .outdentButton .button {padding-left:0.1em;padding-right:0.1em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .increasefontsizeButton .button {padding-left:0.15em;padding-right:0.15em; font-size:1.3em; line-height:0.75em}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .decreasefontsizeButton .button {padding-left:0.4em;padding-right:0.4em; font-size:0.8em;}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .forecolorButton .button {color:red;}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet += ".easyEditorToolBar .fontnameButton .button {font-family:serif}\n" ;
config.shadowTiddlers.EasyEditToolBarStyleSheet +="/*}}}*/";

store.addNotification("EasyEditToolBarStyleSheet", refreshStyles); 

config.shadowTiddlers.EasyEditDocStyleSheet = "/*{{{*/\n \n/*}}}*/";
if (config.annotations) config.annotations.EasyEditDocStyleSheet = "This stylesheet is applied when editing a text with the wysiwyg easyEditor";

!Link button add-on
EditorToolbar.createLinkButton = function(place,name) {
	this.elements[name] = createTiddlyButton(place,EditorToolbar.buttons[name].label,EditorToolbar.buttons[name].toolTip,contextualCallback(this,EditorToolbar.onInputLink()),"button");
}

EditorToolbar.onInputLink = function() {
	return function(){
		var browser = config.browser.isGecko ? geckoEditor : (config.browser.isIE ? IEeditor : null);
		var value = browser ? browser.getLink(this.target) : "";
		value = prompt(EditorToolbar.buttons["createlink"].prompt,value);
		if (value) browser.doLink(this.target,value);
		else if (value=="") this.target.execCommand("unlink", false, value);
		return false;
	};
}

EditorToolbar.buttonsList += ",separator,createlink";

EditorToolbar.buttons.createlink = {onCreate : EditorToolbar.createLinkButton, label:"L", toolTip : "Set link", prompt: "Enter link url"};

geckoEditor.getLink = function(doc){
	var range=doc.defaultView.getSelection().getRangeAt(0);
	var container = range.commonAncestorContainer;
	var node = (container.nodeType==3) ? container.parentNode : range.startContainer.childNodes[range.startOffset];
	if (node && node.tagName=="A") {
		var r=doc.createRange();
		r.selectNode(node);
		doc.defaultView.getSelection().addRange(r);
		return (node.getAttribute("tiddler") ? "#"+node.getAttribute("tiddler") : node.href);
	}
	else return (container.nodeType==3 ? "#"+container.textContent.substr(range.startOffset, range.endOffset-range.startOffset).replace(/ $/,"") : "");
}

geckoEditor.doLink=function(doc,link){ // store the tiddler name in a temporary attribute to avoid url encoding of the tiddler's name
	var pin = "href"+Math.random().toString().substr(3);
	doc.execCommand("createlink", false, pin);
	var isTiddler=(link.charAt(0)=="#");
	var node = doc.defaultView.getSelection().getRangeAt(0).commonAncestorContainer;
	var links= (node.nodeType!=3) ? node.getElementsByTagName("a") : [node.parentNode];
	for (var cpt=0;cpt<links.length;cpt++)
		if (links[cpt].href==pin){
			links[cpt].href=isTiddler ? "javascript:;" : link;
			links[cpt].setAttribute("tiddler",isTiddler ? link.substr(1) : "");
		}
}

geckoEditor.beforeLinkPostProcessor = geckoEditor.beforelinkPostProcessor ? geckoEditor.beforelinkPostProcessor : geckoEditor.postProcessor;
geckoEditor.postProcessor = function(text){
	return geckoEditor.beforeLinkPostProcessor(text).replace(/<a tiddler="([^"]*)" href="javascript:;">(.*?)(?:<\/a>)/gi,"[[$2|$1]]").replace(/<a tiddler="" href="/gi,'<a href="');
}

geckoEditor.beforeLinkPreProcessor = geckoEditor.beforeLinkPreProcessor ? geckoEditor.beforeLinkPreProcessor : geckoEditor.preProcessor
geckoEditor.preProcessor = function(text){
	return geckoEditor.beforeLinkPreProcessor(text).replace(/\[\[([^|\]]*)\|([^\]]*)]]/g,'<a tiddler="$2" href="javascript:;">$1</a>');
}

IEeditor.getLink = function(doc){
	var node=doc.selection.createRange().parentElement();
	if (node.tagName=="A") return node.href;
	else return (doc.selection.type=="Text"? "#"+doc.selection.createRange().text.replace(/ $/,"") :"");
}

IEeditor.doLink = function(doc,link){
	doc.execCommand("createlink", false, link);
}

IEeditor.beforeLinkPreProcessor = IEeditor.beforeLinkPreProcessor ? IEeditor.beforeLinkPreProcessor : IEeditor.preProcessor
IEeditor.preProcessor = function(text){
	return IEeditor.beforeLinkPreProcessor(text).replace(/\[\[([^|\]]*)\|([^\]]*)]]/g,'<a ref="#$2">$1</a>');
}

IEeditor.beforeLinkPostProcessor = IEeditor.beforelinkPostProcessor ? IEeditor.beforelinkPostProcessor : IEeditor.postProcessor;
IEeditor.postProcessor = function(text){
	return IEeditor.beforeLinkPostProcessor(text).replace(/<a href="#([^>]*)">([^<]*)<\/a>/gi,"[[$2|$1]]");
}

IEeditor.beforeLinkInitContent = IEeditor.beforeLinkInitContent ? IEeditor.beforeLinkInitContent : IEeditor.initContent;
IEeditor.initContent = function(doc,content){
	IEeditor.beforeLinkInitContent(doc,content);
	var links=doc.body.getElementsByTagName("A");
	for (var cpt=0; cpt<links.length; cpt++) {
		links[cpt].href=links[cpt].ref; //to avoid IE conversion of relative URLs to absolute
	}
}

config.shadowTiddlers.EasyEditToolBarStyleSheet += "\n/*{{{*/\n.easyEditorToolBar .createlinkButton .button {color:blue;text-decoration:underline;}\n/*}}}*/";

config.shadowTiddlers.EasyEditDocStyleSheet += "\n/*{{{*/\na {color:#0044BB;font-weight:bold}\n/*}}}*/";

<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
Ecotricity is an ethical energy company, both an independent generator of renewable electricity and an electricity and gas supplier to around 40,000 domestic and business customers. The company's business has centred on building wind turbines and connecting them to the grid. It has built 15 wind parks across the UK, including a wind turbine on the outskirts of Cardiff at G24i (a solar panel manufacturer).

Ecotricity wants to develop a distributed energy storage system, a black box to store electricity at consumer level. Consumer demand and wind-generated energy have peaks and troughs that do not match. Currently Ecotricity sells electricity at low value during high wind generation and buys it at high value to fill the gaps, hence peak demand is met by the dirtiest power generation and Ecotricity cannot maximise its profits. A distributed energy storage system could flatten the demand curve and sell power on demand, enabling better value for Ecotricity. Flattening the grid/consumer demand profile would reduce the need for dirty power capacity, furthering Ecotricity's ethical stance and supporting the UK's green agenda, possibly even helping it cope with the loss of the 8GW of coal-fired power stations that were earmarked for closure by 2015.

Ecotricity will seek the approval of the industry regulator and others before introducing energy exporting black boxes into the market (and into customers' homes), and is currently carrying out a feasibility study into the idea as a whole. To assist in doing all of these, a number of questions need to be answered. Assuming technology is available to store then distribute energy locally, what is the effect on the stability of the National Grid of introducing hundreds of thousands of black box energy stores? The AC output from all the large generators currently has to be synchronised anyway, in terms of peak voltage, frequency and phase angle. Are the current standards (G83) for Grid Tied Inverters (which convert DC into AC) sufficient to ensure that the introduction of an unprecedented number of energy stores or micro generators onto the grid will not affect its stability?
!!Background: Silicon melting

In a metallurgical process at Elkem there is a need for efficient melting of silicon.
Some relevant data for Si (Wikipedia):
Melting point: 1414 °C
Heat capacity at 25 °C: 19.8 J/mole K = 0.70 kJ/kg K
Heat of fusion (melting): 50.21 kJ/mole = 1788 kJ/kg
Liquid density (at melting point): 2 570 kg/m3
Solid density (at room T): 2 329 kg/m3
Hence, silicon melting requires a lot of energy. More energy is needed for melting than to heat Si to the melting point. Solid Si is less dense than liquid Si. Hence, solid particles will float.
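The energy comparison can be checked with a quick back-of-the-envelope calculation (a sketch only: the 25 °C heat capacity is taken as constant all the way to the melting point, which understates the sensible heat somewhat):

```javascript
// Energy budget for bringing 1 kg of solid Si from 25 °C to molten at 1414 °C.
// Heat capacity is assumed constant at its 25 °C value (a simplification).
const cp = 0.70;        // kJ/(kg K), heat capacity of solid Si
const Tmelt = 1414;     // °C, melting point
const T0 = 25;          // °C, starting temperature
const hFusion = 1788;   // kJ/kg, heat of fusion

const sensible = cp * (Tmelt - T0);   // heat to reach the melting point
const total = sensible + hFusion;     // total energy per kg melted

console.log(sensible.toFixed(0));  // ≈ 972 kJ/kg
console.log(total.toFixed(0));     // ≈ 2760 kJ/kg
```

The latent heat (1788 kJ/kg) indeed exceeds the sensible heat (≈972 kJ/kg), consistent with the statement above.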

As a comparison, ice/water has the following properties:
Melting point: 0 °C
Heat capacity at -10 °C: 2.05 kJ/kg K
Heat capacity at 20 °C: 4.18 kJ/kg K
Heat of fusion (melting): 334 kJ/kg
Liquid density (at melting point): 998 kg/m3
Solid density (at melting point): 917 kg/m3
One option for melting Si is to use an induction furnace. A crucible is surrounded by an electric coil. While solid silicon is a semiconductor, molten Si is a metal with high electric conductivity. The liquid Si can therefore be heated by induction currents caused by a high alternating current in the coil. Depending on the conductivity of the crucible material, a small or large fraction of the power can be induced in the crucible.

Assuming that a high fraction of the power is induced directly in the silicon metal, the melt will be strongly influenced by electromagnetic forces in a thin boundary layer. The bulk part of the time averaged force is inward directed and conservative (irrotational). This body force is the source of a magnetic pressure that will form a meniscus at the top, cf. figure 1.

{{c{Figure 1 – Meniscus and typical flow patterns in an induction heated furnace}}}

At the top and the bottom of the silicon region, the strength of the magnetic field will vary along the boundary. This variation implies a tangential rotational component of the time averaged magnetic force. The force direction is towards higher values of the magnetic field. This tangential force drives a fluid motion within the boundary layer, resulting in convection loops as shown in figure 1, cf. [1].

Cold, solid silicon particles can be fed continuously or batchwise.

!!Study Group Problem

Melting will normally start with a certain amount of molten silicon in the crucible. In the beginning of a melting cycle a high fraction of the power is induced in the crucible above the silicon level. The resulting high maximum temperature in this region is a limiting factor for the power input.

Towards the end of the cycle a higher fraction of the power is induced directly in the metal. The power induced in the crucible is distributed almost evenly along the height, within the region covered by the coil. Hence, more power can be induced now.

To increase the melting rate more power is needed in the melting region at the top of the silicon. One limiting factor seems to be heat transfer to the molten silicon.

If the induction frequency is lowered, a higher fraction of the power will be induced directly in the metal. Then more power can be applied without getting higher temperatures than before in the crucible walls.

As part of an evaluation on how to improve melting capacity, Elkem wants improved insight about other consequences of lower frequency and higher power input. The ESGI discussions should focus on changes in the fluid flow and in the melting at the top of the liquid silicon.

!!Some issues:

* Height and shape of the meniscus. Preliminary estimates indicate a height that will vary from 0 to some 0.4 m during operation, after reducing the frequency.
* Stability of the meniscus.
* Electromagnetic stirring.
* Flow stability, splashing.
* Influence of solid Si-particles, including the amount of un-melted Si.
* Melting rate.
* Heat transfer.

The main objective is to estimate the various consequences of reduced frequency and evaluate whether they are positive, negative or neutral with respect to efficient melting of silicon.


[1] P.A. Davidson: An Introduction to Magnetohydrodynamics, Cambridge Texts in Applied Mathematics
Mid-Western Regional Hospital

This study group problem looks at modelling the consequences of reducing the amount of O-negative blood stored at a group of six hospitals in the Limerick region. O-negative blood must be carefully rationed: it can be transfused into any patient regardless of blood group and is therefore used in emergencies. However, the Irish blood transfusion service is concerned that O-negative donors are being asked to donate too often and would like to reduce the amount of O-negative blood used. Blood is stored in units of about 250ml and can be stored for 35 days, but in practical terms the blood provided to Limerick generally has at least 21 days of storage life left on it. Currently each of the six hospitals in the group holds a fixed number of units of O-negative blood, between two and five, at any one time. This study group problem aims to quantify and minimise risks to patients in the Limerick region under a new regime in which less O-negative blood is stored within the region at any one time. The problem is further complicated as one of the hospitals in the region acts as a hub and collects all the O-negative blood from the other hospitals 10 days before its expiry date.

Given a dataset of historical blood use at each of the six hospitals, the group is asked to:

1) Estimate the level of risk to patients associated with a proposed new blood storage scheme

2) Determine whether a lower level of risk can be achieved without increasing the number of units of blood.
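As a starting point for 1), one might model emergency O-negative demand at a hospital as a Poisson process, so that the stockout risk over a replenishment interval is a Poisson tail probability. The sketch below is illustrative only; the rate `lambda` and the independence assumption would have to be estimated from and validated against the historical dataset.

```javascript
// Probability that Poisson(lambda) demand exceeds the stock s held on site,
// i.e. the chance of a stockout before the next replenishment.
// lambda: expected number of units demanded over the replenishment interval.
function stockoutProbability(lambda, s) {
  let term = Math.exp(-lambda); // P(N = 0)
  let cdf = term;
  for (let k = 1; k <= s; k++) {
    term *= lambda / k;         // P(N = k) computed from P(N = k-1)
    cdf += term;
  }
  return 1 - cdf;               // P(N > s)
}

// Hypothetical example: 1 unit expected per interval, 3 units in stock.
console.log(stockoutProbability(1, 3).toFixed(4)); // ≈ 0.0190
```

Summing such per-hospital risks (and accounting for transfers from neighbouring hospitals) would give a first comparison between the current and proposed storage schemes.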
Crystal Energy

Crystal Energy purchases power from the Single Electricity Market, and supplies it to customers at half-hourly variable rates (in contrast to the flat rates available from national suppliers). Although this exposes customers to volatile prices, it moves them away from averaged tariffs and allows them to avoid high price times by moving the timing of their loads (Demand Side Management). Crystal Energy are interested in identifying potential customers whose load profile would allow the achievement of substantial savings, depending on their risk appetite. This requires some statistical analysis of price information and price forecasts, along with modelling of possible customer load profiles. Another problem arises when rescheduling thermal loads to times of low prices: this requires the use of additional electricity to account for an associated thermal energy loss. Efficient optimisation methods for scheduling (taking account of uncertainty), coupled with mathematical modelling of the thermal load, are desirable here.
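For the thermal-load part, a minimal lumped-capacity model (an assumption for illustration, not Crystal Energy's method) treats the store as cooling exponentially with a time constant `tau`, so preheating earlier requires extra input energy to cover the decay. Shifting then pays off only if the cheap price beats the expensive one by more than the loss factor:

```javascript
// Cost of preheating a thermal load earlier at a cheaper price, with
// exponential thermal decay (lumped-capacity model, hypothetical parameters).
// E: useful thermal energy needed (kWh); tau: thermal time constant (h).
function preheatCost(E, priceEarly, shiftHours, tau) {
  const retention = Math.exp(-shiftHours / tau); // fraction of heat retained
  return priceEarly * E / retention;             // extra input covers the loss
}

// Hypothetical numbers: 10 kWh load, tau = 24 h, shifted 4 h earlier.
const costShifted = preheatCost(10, 0.08, 4, 24); // cheap off-peak price, €/kWh
const costDirect = 10 * 0.20;                     // peak price, €/kWh
console.log(costShifted < costDirect); // shifting pays off in this example
```

A scheduling optimiser would evaluate this trade-off across the half-hourly price forecast, with the price uncertainty entering through `priceEarly` and the peak price.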
In electrical smelting, current is supplied through electrodes which are gradually consumed. These electrodes are made of paste which flows under increasing temperature. The paste is charged by adding paste briquettes on top of the electrode which soften and flow to form a dense fluid which is baked to form the solid electrode. When producing electrodes rapidly the softening/flow process can be incomplete due to trapped air leading to breakages.

The objective of the Study Group is a mathematical analysis of this process which will allow a transfer of laboratory measurements to conditions for real electrodes.
See the figures: [[PS|p/esgi/44/NKT.ps]] or [[PDF|p/esgi/44/NKT.pdf]].

Photonic crystals (PCs) are periodic dielectric structures that enable an efficient control of optical electromagnetic signals (light) in geometries with features on the sub-wavelength scale; a control that is not possible in classical optics based on total internal reflection and refraction of light rays. In photonic crystals a periodic photonic potential can induce a photonic bandgap, i.e., a range of frequencies where radiation energy in certain polarisation states is not allowed to flow in specific directions. This is the photonic analogue of the forbidden energy band for electrons in the periodic electric potential of the lattice of atoms of a semiconductor material.

Photonic crystals enable new advanced all-optical signal processing in regions with dimensions of a few cubic wavelengths because radiation losses can be greatly reduced; something that is impossible in classical optical components without serious energy loss.

Electromagnetic phenomena are governed by Maxwell’s equations and a set of boundary conditions that the electric vector field `E` and the magnetic vector field `H` must satisfy at material interfaces.

Figure 1 shows a planar cut of a two-dimensional (2D) photonic crystal made of a background material of relative dielectric constant `e_{r1}` in which a triangular lattice of holes is etched. For silicon nitride the relative dielectric constant is assumed to be 4 at the wavelengths of interest, and that of air is 1. The lattice constant is L and the hole radius is `r_0`. The eigenstates of this 2D structure can be categorised as `H` states (the magnetic field vector is parallel to the 2D plane and the electric field vector is perpendicular to it) or `E` states (the electric field vector is parallel to the 2D plane and the magnetic field vector is perpendicular to it). Band diagrams usually present eigenfrequencies, `w=L/l`, versus `k` vector values, where `l` is the free-space wavelength and `k` is the Bloch mode propagation vector. The chosen `k` vectors usually lie on the contour of the irreducible Brillouin zone in order to determine the existence of bandgaps (the smallest bandgap is determined by the highest photon energies). The preferred regime of operation is typically for `w<1`.

We assume that only `E` states are present in the configuration.
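To connect the normalized units to physical dimensions, a band-diagram frequency `w = L/l` can be converted back to a free-space wavelength once a lattice constant is chosen. The numbers below are illustrative assumptions, not values from the problem statement:

```javascript
// Convert a normalized band-diagram frequency w = L / lambda into the
// corresponding free-space wavelength for a chosen lattice constant.
function wavelengthNm(w, latticeConstantNm) {
  return latticeConstantNm / w;
}

// Hypothetical example: lattice constant L = 500 nm, normalized frequency 0.4.
console.log(wavelengthNm(0.4, 500)); // 1250 nm, in the preferred regime w < 1
```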
Given Intercepted Radio Frequency (RF) emissions, provide a prediction of the number of underlying source platforms and the association between the emissions and platforms.

Zoning problem in determining environmental impacts of traffic

Traffic has impacts on the environment. Traffic models are normally used to calculate the traffic intensities on the road network. The calculated traffic intensities form the basis for calculating the environmental impacts. In the impact assessment the traffic intensities are converted to the width of the zone that suffers negative impacts from the traffic. Examples of relevant environmental impacts are air and noise pollution. The impacted area can be considered as a buffer zone around the road network. To determine what the effects of the impact zone are, it is necessary to calculate the number of houses and inhabitants, the size of the nature area, and so forth.

The calculation has two steps:
# Determine the shape of the buffer zone as a polygon
# Determine the overlay area with other polygons

''Step 1 Determining the shape of the buffer zone''

* A collection of road segments with for each segment the following information:
** The position of the road segment as a collection of X,Y coordinates. By connecting the coordinates a poly-line is formed.
** The width of the buffer zone, both on the left and right side of the road segment (due to, for example, noise abatement constructions, the width of the zone may be different on the left and the right hand side)

''Desired outputs:''
* For each road segment the buffer zone as a collection of X,Y coordinates. By connecting the points a (closed) polygon is formed.
* The buffer zone stretches from the road segment to the specified buffer width, and is bordered by the buffer zones of the preceding and following road segments
* Buffer zones are not allowed to overlap.

!Possible approaches to solve the zoning problem
* Use circles with a radius of the width of the buffer zone and follow the poly-line in small steps until the last point. Then construct the buffer zone, taking into account the rules given above.
* Use a parallel line at the specified distance and use the poly-line to construct the proper buffer zone. An illustrative example is given in figure 1. In the figure it is obvious that the parallel lines at distance `A_{L,i}` and `A_{R,i}` form the starting point. Due to the angles between the lines that make up the poly-line and the road segments, special solutions are needed to construct the proper buffer zone. In the figure, on the right-hand side of the road segment pieces need to be removed, while on the left-hand side area needs to be added. Special cases include road crossings and turn-offs.
{{c{Figure 1 Illustrative example of a buffer zone}}}

The purpose for step 1) is to develop an algorithm that determines the buffer zone as polygon.
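A building block for the parallel-line approach is offsetting a single segment by its left and right widths; the quadrilateral between the two offset lines is that segment's raw buffer, before the corner corrections described above. A minimal sketch:

```javascript
// Raw buffer polygon for one road segment (x1,y1)-(x2,y2), offset by
// widthLeft to the left and widthRight to the right of the travel direction.
// Corner trimming/filling at segment joints is NOT handled here.
function segmentBuffer(x1, y1, x2, y2, widthLeft, widthRight) {
  const dx = x2 - x1, dy = y2 - y1;
  const len = Math.hypot(dx, dy);
  const nx = -dy / len, ny = dx / len; // unit normal pointing left
  return [
    [x1 + nx * widthLeft, y1 + ny * widthLeft],
    [x2 + nx * widthLeft, y2 + ny * widthLeft],
    [x2 - nx * widthRight, y2 - ny * widthRight],
    [x1 - nx * widthRight, y1 - ny * widthRight]
  ]; // closed polygon (connect the last point back to the first)
}

// Horizontal segment, 5 m buffer to the left, 3 m to the right:
console.log(segmentBuffer(0, 0, 10, 0, 5, 3));
```

The full algorithm would then merge these per-segment quadrilaterals, trimming overlaps on the inside of bends and filling gaps on the outside.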

An additional problem is how to eliminate the overlap in the buffer zones in sharp turns, where multiple buffer zones may overlap.

''Step 2 Determining the overlay''
Once the buffer zone has been determined, the next step is to calculate the overlapping area with other polygons such as nature areas. For the left-hand buffer and the right-hand buffer a separate calculation of the overlapping area is needed.

In principle the calculated buffer zone could be exported to a Geographic Information System (GIS) to calculate the overlay there. However, due to the dynamic and interactive character of the traffic model, a built-in overlay algorithm would be preferable.

* Polygon representing the buffer zone.
* Polygon representing the other area of interest

''Desired outputs:''
* Polygon with overlay area that lies in the buffer zone and the area of interest.

The purpose of step 2) is to develop an overlay algorithm to determine the overlap between the buffer zone and the area of interest. The algorithm should be easy to implement in the traffic model.
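One candidate for such a built-in algorithm is Sutherland–Hodgman clipping combined with the shoelace area formula: clip the buffer polygon against the area of interest and take the area of the result. The sketch below assumes the area-of-interest polygon is convex; concave areas would need a more general method such as Weiler–Atherton.

```javascript
// Shoelace formula: area of a simple polygon given as [x, y] vertex pairs.
function polygonArea(pts) {
  let a = 0;
  for (let i = 0; i < pts.length; i++) {
    const [x1, y1] = pts[i];
    const [x2, y2] = pts[(i + 1) % pts.length];
    a += x1 * y2 - x2 * y1;
  }
  return Math.abs(a) / 2;
}

// Sutherland-Hodgman: clip `subject` against a CONVEX, counter-clockwise
// `clip` polygon; returns the intersection polygon.
function clipPolygon(subject, clip) {
  let output = subject;
  for (let i = 0; i < clip.length; i++) {
    const A = clip[i], B = clip[(i + 1) % clip.length];
    const inside = p =>
      (B[0] - A[0]) * (p[1] - A[1]) - (B[1] - A[1]) * (p[0] - A[0]) >= 0;
    const intersect = (p, q) => {
      const dx = q[0] - p[0], dy = q[1] - p[1];
      const ex = B[0] - A[0], ey = B[1] - A[1];
      const t = (ey * (p[0] - A[0]) - ex * (p[1] - A[1])) / (ex * dy - ey * dx);
      return [p[0] + t * dx, p[1] + t * dy];
    };
    const input = output;
    output = [];
    for (let j = 0; j < input.length; j++) {
      const P = input[j], Q = input[(j + 1) % input.length];
      if (inside(Q)) {
        if (!inside(P)) output.push(intersect(P, Q)); // entering the half-plane
        output.push(Q);
      } else if (inside(P)) {
        output.push(intersect(P, Q));                 // leaving the half-plane
      }
    }
  }
  return output;
}

// Unit-square buffer overlapped with a shifted square area of interest:
const buffer = [[0, 0], [1, 0], [1, 1], [0, 1]];
const areaOfInterest = [[0.5, 0.5], [1.5, 0.5], [1.5, 1.5], [0.5, 1.5]];
console.log(polygonArea(clipPolygon(buffer, areaOfInterest))); // 0.25
```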
* 2010, Oct 4-8: Trabzon (Turkey). [[EASGI 1|EASGI 1]]
|Study Group|Place|Date|
|[[ESGI 26|ESGI 26]]|Nottingham (UK)|1993, Mar 29- Apr2|
|[[ESGI 27|ESGI 27]]|Glasgow (UK)|1994, Mar 21-25|
|[[ESGI 28|ESGI 28]]|Cambridge (UK)|1995, Mar 22-21|
|[[ESGI 29|ESGI 29]]|Oxford (UK)|1996, Mar 18-22|
|[[ESGI 30|ESGI 30]]|Bath (UK)|1997, Apr 7-11|
|[[ESGI 31|ESGI 31]]|Southampton (UK)|1998, Mar 22-27|
|[[ESGI 32|ESGI 32]]|Lyngby (Denmark)|1998, Aug 31- Sep4|
|[[ESGI 33|ESGI 33]]|Leiden (Netherlands)|1999, Sep 14-18|
|[[ESGI 34|ESGI 34]]|Edinburgh (UK)|1999, Apr 6-9|
|[[ESGI 35|ESGI 35]]|Odense (Denmark)|1999, Feb 23-27|
|[[ESGI 36|ESGI 36]]|Eindhoven (Netherlands)|1999, Nov 15-19|
|[[ESGI 37|ESGI 37]]|Sheffield (UK)|2000, Apr 10-14|
|[[ESGI 38|ESGI 38]]|Lyngby (Denmark)|2000, Jun 19-23|
|[[ESGI 39 (SWI 2000)|ESGI 39 (SWI 2000)]]|Twente (Netherlands)|2000, Oct 9-13|
|[[ESGI 40|ESGI 40]]|Keele (UK)|2001, Apr 9-12|
|[[ESGI 41|ESGI 41]]|Odense (Denmark)|2001, Aug 13-17|
|[[ESGI 42|ESGI 42]]|Amsterdam (Netherlands)|2002, Feb 18-22|
|[[ESGI 43|ESGI 43]]|Lancaster (UK)|2002, Apr 2-5|
|[[ESGI 44|ESGI 44]]|Denmark (Denmark)|2002, Aug 19-23|
|[[ESGI 45|ESGI 45]]|Leiden (Netherlands)|2003, Feb 17-21|
|[[ESGI 46|ESGI 46]]|Bristol (UK)|2003, Mar 31- Apr4|
|[[ESGI 47|ESGI 47]]|Sønderborg (Denmark)|2003, Aug 24-29|
|[[ESGI 48|ESGI 48]]|Delft (Netherlands)|2004, Mar 15-19|
|[[ESGI 49|ESGI 49]]|Oxford (UK)|2004, Mar 28- Apr2|
|[[ESGI 50|ESGI 50]]|Helsinki (Finland)|2004, May 24-28|
|[[ESGI 51|ESGI 51]]|Lyngby (Denmark)|2004, Aug 16-20|
|[[ESGI 52 (SWI 2005)|ESGI 52 (SWI 2005)]]|Amsterdam (Netherlands)|2005, Jan 31- Feb4|
|[[ESGI 53|ESGI 53]]|Manchester (UK)|2005, Mar 21-24|
|[[ESGI 54|ESGI 54]]|Odense (Denmark)|2005, Aug 15-19|
|[[ESGI 55 (SWI 2006)|ESGI 55 (SWI 2006)]]|Eindhoven (Netherlands)|2006, Jan 30- Feb3|
|[[ESGI 56|ESGI 56]]|Bath (UK)|2006, Apr 3-7|
|[[ESGI 57|ESGI 57]]|Denmark (Denmark)|2006, Aug 14-18|
|[[ESGI 58 (SWI 2007)|ESGI 58 (SWI 2007)]]|Utrecht (Netherlands)|2007, Jan 29- Feb2|
|[[ESGI 59|ESGI 59]]|Nottingham (UK)|2007, Mar 26-30|
|[[ESGI 60|ESGI 60]]|Lisboa (Portugal)|2007, Apr 13-19|
|[[ESGI 61|ESGI 61]]|Sønderborg (Denmark)|2007, Aug 13-17|
|[[ESGI 62|ESGI 62]]|Limerick (Ireland)|2008, Jan 21-25|
|[[ESGI 63 (SWI 2008)|ESGI 63 (SWI 2008)]]|Twente (Netherlands)|2008, Jan 28- Feb1|
|[[ESGI 64|ESGI 64]]|Heriot-Watt (UK)|2008, Apr 7-11|
|[[ESGI 65|ESGI 65]]|Porto (Portugal)|2008, Apr 21-24|
|[[ESGI 66|ESGI 66]]|Lyngby (Denmark)|2008, Aug 18-22|
|[[ESGI 67 (SWI 2009)|ESGI 67 (SWI 2009)]]|Wageningen (Netherlands)|2009, Jan 26-30|
|[[ESGI 68|ESGI 68]]|Southampton (UK)|2009, Mar 30- Apr3|
|[[ESGI 69|ESGI 69]]|Coimbra (Portugal)|2009, Apr 20-24|
|[[ESGI 70|ESGI 70]]|Limerick (Ireland)|2009, Jun 28- Jul3|
|[[ESGI 71|ESGI 71]]|Southern Denmark (Denmark)|2009, Aug 17-21|
|[[ESGI 72 (SWI 2010)|ESGI 72 (SWI 2010)]]|Amsterdam (Netherlands)|2010, Jan 25-29|
|[[ESGI 73|ESGI 73]]|Warwick (UK)|2010, Apr 12-16|
|[[ESGI 74|ESGI 74]]|Aveiro (Portugal)|2010, Apr 26-30|
|[[ESGI 75|ESGI 75]]|Limerick (Ireland)|2010, Jun 27- Jul2|
|[[ESGI 76|ESGI 76]]|Lyngby (Denmark)|2010, Aug 16-20|
|[[ESGI 77|ESGI 77]]|Warsaw (Poland)|2010, Sep 27- Oct1|
|[[ESGI 78|ESGI 78]]|Barcelona (Spain)|2010, Jul 6-8|
|[[ESGI 79 (SWI 2011)|ESGI 79 (SWI 2011)]]|Amsterdam (Netherlands)|2011, Jan 24-28|
|[[ESGI 80|ESGI 80]]|Cardiff (UK)|2011, Apr 4-8|
|[[ESGI 81|ESGI 81]]|Lisbon (Portugal)|2011, May 23-27|
|[[ESGI 82|ESGI 82]]|Limerick (Ireland)|2011, Jun 26- Jul1|
|[[ESGI 83|ESGI 83]]|Sønderborg (Denmark)|2011, Aug 15-19|
Road accesses at terminal curbsides are of great importance to the passenger experience at airport arrivals and departures. Waiting time for transportation, long pedestrian paths and lack of information may cause discomfort and bewilderment. In an effort to improve private and public transportation at the front of airport terminals, an evaluation of capacity and level of service should be carried out, for taxis at arrivals and private vehicles at departures, considering angle parking at the curbside.
A 3D-scanner measures distances from a plane or cylinder to points on 3D-objects. This gives a point cloud with up to several million points.
The problem is to automatically find features in such an image. Features could be objects like the centerlines of ridges and valleys, and peaks and holes.

A more detailed description can be found as a [[postscript file|p/esgi/32/SCANtechnology.ps]].
* 2006, Aug 14-18: Toronto (Canada). [[FM-IPSW 2006|FM-IPSW 2006]]
* 2008, Aug 11-15: Toronto (Canada). [[FM-IPSW 2008|FM-IPSW 2008]]
* 2010, Aug 16-20: Toronto (Canada). [[FM-IPSW 2010|FM-IPSW 2010]]
Ice cream is a four phase system comprising ice, fat, air and an aqueous phase. We would like to determine under what conditions ice cream will flow at low temperatures, approximately -5°C to -25°C.

For a given ice cream formulation and temperature we can establish the viscosity of the continuous phase and the phase volume of the solid dispersed phase. (Note that for a given formulation, changes in temperature will change the concentration of the continuous phase and the phase volume of ice.) From the above, we wish to calculate the bulk ice cream viscosity, where the system may also contain a given phase volume of air. This system is then held in some sort of container for a given time. A pressure is applied (possibly by gravity and the weight of the system alone) and the ice cream is extruded through an orifice. We want to know what the flow rate is through the orifice.

In other words, we would like to model the flow rate of the material from the orifice as a function of viscosity of continuous phase, phase volume of solid phase, ratio of these two, phase volume of air, applied pressure and orifice size. Due to the complexity of this problem, achieving a full mathematical description may not be realistic. An understanding of the relative importance of the factors would be a useful first step. 
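As one possible first step for the solids contribution, the bulk viscosity of a suspension is often estimated with the Krieger–Dougherty relation; the parameter values below are generic assumptions for rigid spheres, not measured ice-cream data:

```javascript
// Krieger-Dougherty estimate of suspension viscosity.
// etaC: continuous-phase viscosity (Pa s)
// phi: phase volume of dispersed solids
// phiMax: maximum packing fraction (~0.64 for random packing of spheres)
// intrinsic: intrinsic viscosity (2.5 for rigid spheres)
function suspensionViscosity(etaC, phi, phiMax = 0.64, intrinsic = 2.5) {
  return etaC * Math.pow(1 - phi / phiMax, -intrinsic * phiMax);
}

// Viscosity rises steeply as the ice phase volume approaches dense packing:
console.log(suspensionViscosity(1, 0.3) < suspensionViscosity(1, 0.5)); // true
```

The divergence as `phi` approaches `phiMax` captures why cold (high ice phase volume) ice cream may effectively stop flowing.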
Cork Plastics

Cork Plastics produces fascia boards that are used on roofs. Their objective is to produce the elements with as little material as possible. Consequently, defects sometimes occur, and the group will be asked to determine how they can be avoided. The fabrication process may be summarised as follows:
*PVC compound in the form of fine powder is heated in an extruder, by friction and compression between the screw and barrel and by conducted heat from the barrel walls, to a temperature of around 175 degrees Celsius.
*The resulting melted material (chewing gum consistency) is forced through a carefully shaped die as shown in the figure below.
*On exit from the die (pressure drop) the blowing agents (which are held in solution in the melt, under pressure, in the machine) turn into a gas and cause the PVC melt to become a foam which quickly expands to fill the void in the centre of the profile. If you imagine the melt coming out of the die in the shape of a pipe, it then expands inwards to create a solid rod with the same OD as the pipe.
[img[p/esgi/82/img/Cork Plastics_fig1.jpg]]
*The top of the product is coated with material that gives it a smooth, glossy weatherable surface.
*The PVC is then cooled down to solidify the foam from the outside inwards. For the first few centimetres, the outside shape remains in a stainless steel calibrator so its shape does not change. It then moves slowly through a cold water bath. Details about this process may be found in the attached documents. The foam should completely fill the space at the end of the process.
Defects occur at the corners, where air bubbles can form, or in the central flat part, where the foam layers growing from the top and bottom molten PVC layers do not join correctly; see pictures below. The group should work out how production parameters (temperature, speed of production...) should be adjusted to avoid these defects without increasing the density of the board.
[img[p/esgi/82/img/Cork Plastics_fig2.jpg]]
|Created by|SaqImtiaz|
Resize tiddler text on the fly. The text size is remembered between sessions by use of a cookie.
You can customize the maximum and minimum allowed sizes.
(only affects tiddler content text, not any other text)

Also, you can load a TW file with a font-size specified in the url.
Eg: http://tw.lewcid.org/#font:110

Try using the font-size buttons in the sidebar, or in the MainMenu above.

Copy the contents of this tiddler to your TW, tag with systemConfig, save and reload your TW.
Then put {{{<<fontSize "font-size:">>}}} in your SideBarOptions tiddler, or anywhere else that you might like.

{{{<<fontSize>>}}} results in <<fontSize>>
{{{<<fontSize font-size: >>}}} results in <<fontSize font-size:>>

The buttons and prefix text are wrapped in a span with class fontResizer, for easy css styling.
To change the default font-size, and the maximum and minimum font-size allowed, edit the config.fontSize.settings section of the code below.

This plugin assumes that the initial font-size is 100% and then increases or decreases the size by 10%. This step size of 10% can also be customized.

*27-07-06, version 1.0 : prevented double clicks from triggering editing of containing tiddler.
*25-07-06,  version 0.9



//configuration settings
config.fontSize = {};
config.fontSize.settings = {
	defaultSize : 100,  // all sizes in %
	maxSize : 200,
	minSize : 40,
	stepSize : 10
};

//startup code
var fontSettings = config.fontSize.settings;

if (!config.options.txtFontSize) {
	config.options.txtFontSize = fontSettings.defaultSize;
	saveOptionCookie("txtFontSize"); //remember the size between sessions
}
setStylesheet(".tiddler .viewer {font-size:"+config.options.txtFontSize+"%;}\n","fontResizerStyles");
setStylesheet("#contentWrapper .fontResizer .button {display:inline;font-size:105%; font-weight:bold; margin:0 1px; padding: 0 3px; text-align:center !important;}\n .fontResizer {margin:0 0.5em;}","fontResizerButtonStyles");

config.macros.fontSize = {};
config.macros.fontSize.handler = function (place,macroName,params,wikifier,paramString,tiddler) {
	var sp = createTiddlyElement(place,"span",null,"fontResizer");
	sp.ondblclick = this.onDblClick; //prevent double clicks from opening the editor
	if (params[0])
		createTiddlyElement(sp,"span",null,null,params[0]); //optional prefix text
	createTiddlyButton(sp,"+","increase font-size",this.incFont);
	createTiddlyButton(sp,"=","reset font-size",this.resetFont);
	createTiddlyButton(sp,"–","decrease font-size",this.decFont);
};

config.macros.fontSize.onDblClick = function (e) {
	if (!e) var e = window.event;
	e.cancelBubble = true;
	if (e.stopPropagation) e.stopPropagation();
	return false;
};

config.macros.fontSize.setFont = function () {
	saveOptionCookie("txtFontSize");
	setStylesheet(".tiddler .viewer {font-size:"+config.options.txtFontSize+"%;}\n","fontResizerStyles");
};

config.macros.fontSize.incFont = function () {
	if (config.options.txtFontSize < fontSettings.maxSize)
		config.options.txtFontSize = (config.options.txtFontSize*1)+fontSettings.stepSize;
	config.macros.fontSize.setFont();
};

config.macros.fontSize.decFont = function () {
	if (config.options.txtFontSize > fontSettings.minSize)
		config.options.txtFontSize = (config.options.txtFontSize*1) - fontSettings.stepSize;
	config.macros.fontSize.setFont();
};

config.macros.fontSize.resetFont = function () {
	config.options.txtFontSize = fontSettings.defaultSize;
	config.macros.fontSize.setFont();
};

config.paramifiers.font = {
	onstart: function (v) {
		config.options.txtFontSize = v;
		config.macros.fontSize.setFont();
	}
};
"Fighting food waste and getting food to the people who need it"

A great variety of products is delivered to several Institutions with different profiles on a regular basis. Given certain restrictions on food supplies, the aim is to improve the channeling of the products to the Institutions in need according to their main characteristics and activity.
Models treat the soil surface as effectively planar. However, it is well known that the soil surface can be treated as a fractal. It is an open question to what extent degradation via absorption of photons into a fractal surface differs from that assuming a planar surface, and whether this can explain the inability of models to represent the degradation of some substances.

The global air transport system has changed dramatically following the first scheduled passenger flight on January 1st 1914 from St Petersburg, Florida to Tampa. In 2009 4.76 trillion revenue passenger kilometres were flown on 14,240 passenger aircraft from more than 1000 airports. Many of these journeys were operated with Airbus manufactured aircraft.

During 2010 European air travellers experienced widespread disruption as a result of scalding volcanic eruptions and freezing snowfalls. The complex and complicated nature of modern transport systems became exposed as the heartbeat of timetables slowed and went out of rhythm. Edward Lorenz's butterfly effect was observed... albeit with an aluminium equivalent.

These disruptions to the global air transport system will probably occur again. The ability to understand current timetables and forecast their future evolution would significantly improve the robustness and resilience of daily aircraft operations. It would also allow Airbus to provide their customers with fleet solutions that maximize and sustain their future profitability.

Airlines currently rely on connecting traffic to fill up their aircraft, hub-and-spoke being the most popular business model to ensure traffic capture. This is often characterized as a series of waves arriving at or departing from airports.


Airbus is looking for elegant and simple mathematical solutions to represent the schedules and robustness of the air transport systems of tomorrow.

!!Problem statement

Compute a future hub and spoke network timetable and associated fleet plan that maximizes the airline’s profit under external constraints

!!Future applications

In the future, air transport systems will have to address the demands of air transport growth whilst meeting environmental challenges. Airbus's Global Market Forecast predicts that the Air Transport System will increase by 153% over the next 20 years. This expansion will put increased pressure on airline timetables.

New concepts of operation and aircraft configurations are being considered to balance these demands. Many of these ideas will impact the scheduling of a transport system due to changes in flight speed, turnaround procedures at airports and increased co-operation between airlines...

What should these ideas look like, from an air transport scheduling perspective?
How to Develop Smart Positioning Algorithms for Indoor Satellite Navigation?


The European Space Agency (ESA) is an agency for cooperation among European States in space research, technology and space applications. One of ESA's topics is Satellite Navigation: it is responsible for the construction of the Galileo system and an augmentation system for GPS. Satellite Navigation has many applications, including route guidance for cars, the navigation of boats and airplanes, and the synchronization of telecom networks.

!Problem description
Today’s positioning algorithms for users applying Satellite Navigation are well known and relatively simple. However, in spite of its large number of applications, Satellite Navigation has a major weakness: the availability inside buildings is rather poor (due to the attenuation of the signal by walls). As a result, alternatives for indoor navigation are a topic of research. One potential solution is to use wireless signals to determine the distance between users. In the future we could have the following scenario: inside a shopping mall, a large number of users are able to determine the distance to nearby users, whereas a minority of these users is also able to use Satellite Navigation. In addition, all users are able to exchange data, but they would like to compute their position themselves. This leads to the following question: What are the optimal positioning algorithms in this scenario?
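As a toy version of the question, a user who can range to three peers with known positions can solve for its own 2D position by linearising the circle equations: subtracting the first from the other two yields a 2×2 linear system. The sketch below assumes noise-free ranges, which the real algorithms cannot:

```javascript
// 2D trilateration: position from distances d1..d3 to three known points.
// Subtracting the first circle equation from the other two gives a linear
// system, solved here with Cramer's rule. Noise-free ranges are assumed.
function trilaterate(p1, d1, p2, d2, p3, d3) {
  const a11 = 2 * (p2[0] - p1[0]), a12 = 2 * (p2[1] - p1[1]);
  const a21 = 2 * (p3[0] - p1[0]), a22 = 2 * (p3[1] - p1[1]);
  const sq = p => p[0] * p[0] + p[1] * p[1];
  const b1 = d1 * d1 - d2 * d2 + sq(p2) - sq(p1);
  const b2 = d1 * d1 - d3 * d3 + sq(p3) - sq(p1);
  const det = a11 * a22 - a12 * a21; // zero if the three peers are collinear
  return [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det];
}

// A user at (3, 4) ranging to three fixed peers (hypothetical geometry):
console.log(trilaterate([0, 0], 5, [10, 0], Math.sqrt(65), [0, 10], Math.sqrt(45)));
```

The study-group question is essentially how to extend this to many users, most of whom only know distances to other (initially unlocated) users.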


Gas portfolio optimisation under uncertainty
Table of contents
1 Introduction
2 Problem description
2.1 Physical network
3 Retail market RC
4 Swing purchase contract
5 Storage contract
6 Hub sale
7 Problem
1 Introduction
As a consequence of the liberalisation of the energy markets in Europe, gas
producers and wholesale participants face a new class of optimisation
problems. Gas is not only purchased on long-term contracts, but is also traded
on a growing and increasingly important spot market. Thus, long-term contracts
are no longer used only for securing the supply of gas in a monopolistic
market, but are actually also used for commercial spot trading. The same
considerations apply to storage, which is not only used to secure demand in
winter, but may also be used for commercial trading.
However, the spot markets around Europe are still not perfect in an economic
sense, and there are several market areas for which it is not possible to fully
supply the demand from a spot market (Denmark/Sweden is such an example).
This situation gives rise to some challenging mathematical programs which the
gas industry faces every day. We will here define a problem which is a
simplification compared to real problems. However, it is our hope that, if we
can find a solution to this problem and the methodology is scalable, we may
extend the methods to real problems.
2 Problem description
We would like to find a supply plan for day t in the following situation:
1. The gas supplier has one complex gas supply contract (see below for more details).
2. The supplier has access to one storage.
3. The supplier has obligations to deliver gas to a retail market RC.
4. The supplier has limited access to sell gas on a HUB (market place).
The physical network is shown below. The supplier purchases gas on a so-called
complex or long-term contract. Through Transmission System Operator 2 (TSO2)
it is possible to flow a limited amount of the purchased gas to the HUB and
sell it at the market price. Through Transmission System Operator 1 (TSO1) the
supplier may either supply the retail market RC or inject gas into the storage.
The storage may also be used to supply the retail market.
The characteristics of the purchase contract, storage, retail market and HUB
market are given below.
3 Retail market RC:
The consumption on the retail market is, in this simplified version, only
dependent on the temperature. One simple approach to the relationship between
the consumption $Q_t^{RC}$ on day t and the temperature is to assume the
consumption is normally distributed given the temperature,

$p(Q_t^{RC} \mid T) = N(\alpha - \beta T,\ \sigma_{RC}^2)$ (1)

The temperature may in turn be assumed to follow a normal distribution given
the day t of the year, as in equation 2,

$p(T \mid t) = N(C - A \cos(\omega (t - t_0)),\ \sigma_T^2)$ (2)

where $t_0$ is a reference day and $\alpha$, $\beta$, $C$, $A$, $\omega$,
$\sigma_{RC}$, $\sigma_T$ are given constants.
Illustration 1: Physical network

We are obliged to deliver the demanded quantity of gas. If we do not meet the
demand, we can define the penalty cost as

$C_t^{RC} = f(Q_t^{UD}) \cdot \lambda \cdot Q_t^{UD}$ (3)

where $\lambda$ is a penalty price and the undelivered quantity $Q_t^{UD}$ is
defined as

$Q_t^{UD} = \max(0,\ Q_t^{RC} - Q_t^{SPC,1} - Q_t^{WD})$ (4)

and $f(x)$ is 1 if $x > 0$, otherwise 0.
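To make the demand model concrete, the two conditional distributions in equations 1 and 2 can be simulated. The parameter values below are placeholders standing in for the contract and climate data, which are not given in this extract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameter values -- the real coefficients are contract/climate data.
A_Q, B_Q, SIGMA_Q = 500.0, 10.0, 20.0       # consumption model (eq. 1)
C_T, A_T, SIGMA_T, T0 = 8.0, 10.0, 2.0, 15.0  # temperature model (eq. 2)
OMEGA = 2 * np.pi / 365.0

def sample_temperature(t):
    """Draw a temperature for day t: seasonal mean plus normal noise."""
    mean = C_T - A_T * np.cos(OMEGA * (t - T0))
    return rng.normal(mean, SIGMA_T)

def sample_consumption(T):
    """Draw retail consumption given temperature (colder => more gas)."""
    return max(0.0, rng.normal(A_Q - B_Q * T, SIGMA_Q))

days = np.arange(365)
temps = np.array([sample_temperature(t) for t in days])
demand = np.array([sample_consumption(T) for T in temps])
# Winter demand should exceed summer demand on average.
print(demand[:60].mean() > demand[150:210].mean())  # True
```

Such simulated demand paths are the natural input for scenario-based versions of the optimisation problem defined later.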
4 Swing Purchase contract:
For the supply contract, the purchase $Q_t^{SPC}$ on day t must lie between
$D_{min}$ and $D_{max}$, and for a whole year (gas year) the total purchased
amount of gas must lie between $Y_{min}$ and $Y_{max}$:

$D_{min} \le Q_t^{SPC} \le D_{max}$, $Y_{min} \le \sum_{t=1}^{365} Q_t^{SPC} \le Y_{max}$ (5)

As mentioned, the purchased gas may be sold at the hub in market area TSO2 or
transported to market area TSO1:

$Q_t^{SPC} = Q_t^{SPC,1} + Q_t^{SPC,2}$ (6)

$Q_t^{SPC,1}$ is the quantity going into market area 1, and $Q_t^{SPC,2}$ is
the quantity going into market area 2.
The contract price is an indexed oil price with a 3-month time lag and a
6-month reference period, and is constant within each month. Thus, the price
for month m is

$P_m^{SPC} = k \cdot \frac{1}{6} \sum_{i=m-9}^{m-4} P_i^{oil}$ (7)

for an index factor $k$. If we consider the gas price for April, the average
is over the oil prices $P_i^{oil}$ for the months July to December of the
previous year.
You may assume here that the indexed oil price follows a stochastic Ornstein-
Uhlenbeck process. But in general we are looking for a solution in which we do
not need to make any assumptions about the distribution of the oil price.
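A sketch of this price machinery: an Ornstein-Uhlenbeck path for the oil index, and the lagged six-month average of equation 7. The mean-reversion parameters, the index factor k, and the 30-day months are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck sketch for the indexed oil price (daily steps).
# theta: mean-reversion speed, mu: long-run level, sigma: volatility --
# placeholder values, not contract data.
theta, mu, sigma, dt = 0.05, 60.0, 1.5, 1.0
p = np.empty(3 * 365)
p[0] = 70.0
for t in range(1, p.size):
    p[t] = p[t-1] + theta * (mu - p[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Monthly contract price (eq. 7): average the index over the six months
# ending three months before delivery, scaled by the index factor k.
# Months are approximated as 30 days for brevity.
k = 0.9
def contract_price(month_start_day):
    lag_end = month_start_day - 3 * 30       # three-month time lag
    window = p[lag_end - 6 * 30 : lag_end]   # six-month reference period
    return k * window.mean()

print(contract_price(400))
```

Because of the lag, the contract price for a delivery month is known well in advance, while the hub price is not; that asymmetry is part of what makes the trading decision interesting.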
5 Storage Contract:
We assume here that the supplier has booked storage with the following properties:
● The daily injection into storage $Q_t^{INJ}$ must be less than the injection capacity $C_{INJ}$.
● The daily withdrawal from storage $Q_t^{WD}$ must be less than the withdrawal capacity $C_{WD}$.
● The physical volume in the storage at the end of day t, $V_t$, must be less than the booked storage capacity $V_{book}$.
Furthermore, we have the balance restriction

$V_t = V_{t-1} + Q_t^{INJ} - Q_t^{WD}$ (8)

For the opening balance at the beginning of the year and the closing balance
at the end of the year, we can imagine 2 different cases:
1. The opening and closing balances must equal each other, implying that over time we have a static system.
2. It is possible to value the gas storage volumes, and the volumes become part of the optimisation problem.
We would like to discuss the boundary conditions for the storage at the study group.
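The storage rules above can be collected into a few lines of bookkeeping; the capacity numbers below are placeholders:

```python
# Minimal storage bookkeeping sketch for the balance restriction (eq. 8)
# together with the three capacity constraints; all numbers are placeholders.
C_INJ, C_WD, V_BOOK = 50.0, 80.0, 1000.0

def step(v_prev, q_inj, q_wd):
    """Apply one day's injection/withdrawal, enforcing the contract limits."""
    assert 0.0 <= q_inj <= C_INJ, "daily injection exceeds C_INJ"
    assert 0.0 <= q_wd <= C_WD, "daily withdrawal exceeds C_WD"
    v = v_prev + q_inj - q_wd          # eq. (8): V_t = V_{t-1} + Q_inj - Q_wd
    assert 0.0 <= v <= V_BOOK, "volume outside booked capacity"
    return v

v = 200.0
for q_inj, q_wd in [(50, 0), (50, 0), (0, 80), (30, 10)]:
    v = step(v, q_inj, q_wd)
print(v)  # 200 + 50 + 50 - 80 + 20 = 240
```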
6 HUB sale:
It is possible to sell at the hub. To simplify the problem, assume that the
HUB price $P_t^{HSC}$ follows a modified Ornstein-Uhlenbeck process in which
the mean value is seasonally dependent. But for a general solution we would
prefer a methodology in which we do not have to make any assumptions about the
distribution of the market price.
The quantity sold at the HUB is given as

$Q_t^{HUB} = Q_t^{SPC,2}$ (9)

and the quantity of gas which can flow to the HUB is limited by a maximum
capacity in the pipeline:

$Q_t^{SPC,2} \le F_{max}$ (10)
7 Problem:
For day t: how much gas shall we buy on the purchase contract, how much shall
we sell at the HUB, and how should we utilise the storage? That is, we want to
find a production plan for day t which maximises the expectation

$E\left[ \sum_{t=1}^{N} \left( P_t^{HSC} Q_t^{HUB} - P_m^{SPC} Q_t^{SPC} - C_t^{RC} \right) \right]$ (11)

subject to the constraints and conditions mentioned above.
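As a sanity check on the model, a single-day, certainty-equivalent version of the problem (known prices and demand, demand met exactly, no value assigned to gas left in storage) reduces to a small linear program. All numbers and the variable layout below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# One-day certainty-equivalent sketch: deterministic prices and demand,
# no penalty branch (demand is met exactly). All numbers are placeholders.
P_SPC, P_HUB, DEMAND = 20.0, 25.0, 100.0
D_MIN, D_MAX, F_MAX = 50.0, 200.0, 60.0
C_INJ, C_WD = 50.0, 80.0

# x = [Q_spc1, Q_spc2, Q_inj, Q_wd]; minimise purchase cost minus hub revenue.
c = [P_SPC, P_SPC - P_HUB, 0.0, 0.0]
A_eq = [[1.0, 0.0, -1.0, 1.0]]          # Q_spc1 - Q_inj + Q_wd = demand
b_eq = [DEMAND]
A_ub = [[1.0, 1.0, 0.0, 0.0],           # total purchase <= D_max
        [-1.0, -1.0, 0.0, 0.0]]         # total purchase >= D_min
b_ub = [D_MAX, -D_MIN]
bounds = [(0, None), (0, F_MAX), (0, C_INJ), (0, C_WD)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
q_spc1, q_spc2, q_inj, q_wd = res.x
print(res.fun)  # net cost; a negative value would mean a trading profit
```

Note that this toy LP always drains the storage, because gas left in storage carries no value at the end of the single day. Valuing that gas across days, under price and temperature uncertainty and the yearly swing limits, is exactly what turns the real problem into a stochastic program.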
Analog Devices

A problem with keyboards on touchscreen phones is the lack of tactile feedback when a key is pressed. This makes interacting with a keyboard on a touchscreen phone different from interacting with a real keyboard: typing accuracy is reduced, and touch typing without visual confirmation of a valid touch becomes impossible. One way of mimicking the feel of a keyboard is to set up a localised vibration of the screen at the site of the key. Such a vibration must have an amplitude of about 30 microns to be perceptible, and must be localised to within one or two millimetres (the size of a key on a touchscreen phone). The group is asked to investigate how this can be achieved by placing an array of transducers around the edge of the screen. The screen itself is a thin (less than a millimetre thick) glass sheet. Specifically, what waveforms must be applied to the transducers in order to generate localised vibrations?

Important considerations are:

* What is the minimum number of transducers needed to achieve this effect, and what are their optimal locations?
* What constraints are there on the boundary conditions at the edge of the screen? (Ideally the screen should be clamped at the edges to prevent dust infiltration.)
* Can the screen be modelled in the linear elastic limit? (Important if the principle of superposition is to be used for multiple key presses.)
* How sensitive are the results to changes in the normal modes of the screen due to contact with one or more fingers?
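A rough feel for the waveform question can be had from a one-dimensional modal model (a strip of "screen" rather than a plate). The sketch below solves a least-squares problem for the complex drive amplitudes of edge transducers aiming at a target key, and checks the superposition property mentioned above. All physical parameters (stiffness, damping, geometry) are illustrative placeholders:

```python
import numpy as np

# Modal model of a 1D simply supported strip driven at a single frequency.
L = 0.1                      # strip length [m]
N_MODES = 40
OMEGA = 2 * np.pi * 2000.0   # drive frequency [rad/s]
GAMMA = 5e3                  # heavy modal damping so several modes contribute

def green(x_obs, x_src):
    """Frequency-domain modal transfer between source and observation points."""
    n = np.arange(1, N_MODES + 1)
    w_n = 9.87 * n**2                        # illustrative dispersion relation
    phi_obs = np.sin(np.outer(np.atleast_1d(x_obs), n * np.pi / L))
    phi_src = np.sin(np.outer(np.atleast_1d(x_src), n * np.pi / L))
    denom = w_n**2 - OMEGA**2 + 1j * GAMMA * OMEGA
    return (phi_obs / denom) @ phi_src.T

x_trans = np.linspace(0.005, 0.095, 8)       # transducer positions
x_grid = np.linspace(0.01, 0.09, 81)         # observation grid
G = green(x_grid, x_trans)                   # (grid x transducer) responses

def focus_amps(target):
    """Least-squares complex drive amplitudes aiming a unit response at target."""
    desired = np.isclose(x_grid, target).astype(float)
    amps, *_ = np.linalg.lstsq(G, desired, rcond=None)
    return amps

a1, a2 = focus_amps(0.03), focus_amps(0.07)  # two simultaneous "keys"
# Linear elasticity => superposition: driving with a1+a2 gives the sum field.
both = G @ (a1 + a2)
print(np.allclose(both, G @ a1 + G @ a2))  # True
```

How sharply such a field can actually be focused (and with how many transducers) is, of course, the substance of the problem; the 2D clamped plate and finger-loaded modes will behave quite differently from this strip.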
These are two independent problems in the same problem domain.

!!Problem 1

Given a weighted bipartite graph, an optimal maximum weighted matching can be found using Edmonds' algorithm. To apply this algorithm in the problem domain of interest, an efficient hardware implementation is required to achieve the necessary throughput, since processing in excess of 100 graphs of 1000 nodes per second is not achievable on current conventional processors. A Boltzmann machine provides a suitable approximation for solving the problem using a parallel processing array, but it too is unsuitable for an FPGA implementation due to its resource requirements. Therefore, an alternative parallel implementation for solving the maximum weighted matching problem (or providing a good approximation) is sought that can realistically be implemented on an FPGA or microprocessor. A suitable algorithm needs to have the following qualities:
## Must be able to be split into blocks of multiple parallel processes.
## Must be deterministic.
## Must run in a fixed time possibly dependent on the number of nodes but independent of the weights.
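One candidate with roughly these qualities is the auction algorithm of Bertsekas: in each round, every unassigned node performs an independent row reduction (the parallelisable part), and with a fixed bidding increment eps the result is within n·eps of optimal. It does not fully satisfy the third quality, since its round count depends on the weight range. The sketch below is a minimal serial version for a dense square weight matrix, not the FPGA design itself:

```python
import numpy as np

# Auction algorithm sketch (Bertsekas) for maximum weight bipartite matching.
# Each unassigned bidder finds its best and second-best object (independent
# row reductions), then raises the winning object's price by the margin + eps.
def auction_matching(W, eps=1e-3):
    n = W.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)     # owner[j] = bidder holding object j
    assigned = -np.ones(n, dtype=int)  # assigned[i] = object held by bidder i
    while (assigned < 0).any():
        for i in np.where(assigned < 0)[0]:
            values = W[i] - prices
            j = int(np.argmax(values))
            best = values[j]
            values[j] = -np.inf
            second = values.max()
            prices[j] += best - second + eps   # raise price by the margin
            if owner[j] >= 0:
                assigned[owner[j]] = -1        # previous owner is outbid
            owner[j] = i
            assigned[i] = j
    return assigned

W = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
match = auction_matching(W)
total = W[np.arange(3), match].sum()
print(match, total)  # optimal matching weight 4 + 5 + 2 = 11
```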

!!Problem 2

The input graph in the problem domain is transformed into a bipartite graph suitable for solving with Edmonds' algorithm using the following transform:
Split each vertex `v` into two new vertices `(v, 0)` and `(v, 1)`, and replace each directed edge `(u, v)` with the undirected edge $((u, 0), (v, 1))$.
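The transform itself is a one-liner; the weight-dictionary representation below is an assumption for illustration:

```python
# Vertex-splitting transform: each vertex v becomes (v, 0) and (v, 1), and
# each directed edge (u, v) with weight w becomes the undirected bipartite
# edge ((u, 0), (v, 1)) with the same weight.
def split_transform(directed_edges):
    """directed_edges: dict mapping (u, v) -> weight."""
    return {((u, 0), (v, 1)): w for (u, v), w in directed_edges.items()}

g = {("a", "b"): 3.0, ("b", "a"): 1.0, ("b", "c"): 2.0}
bipartite = split_transform(g)
print(bipartite[(("a", 0), ("b", 1))])  # 3.0
```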

When using Edmonds' algorithm to solve this maximum weighted matching problem, the weights between nodes are assumed to be independent. However, in the problem domain of interest this is not always the case: up to at least the next 32 weights per node may be present, depending on the previous matching.
This problem is NP-hard. A greedy algorithm can be used to extract the largest pairs of weights, and the resulting bipartite graph can then be solved with Edmonds' algorithm. However, a more accurate approximation is sought.
Current is supplied to a Søderberg electrode by copper clamps that are attached to the steel casing around the electrode. From this casing steel fins run into the paste to enhance current transport, creating a non-uniform distribution of current and temperature within the paste.

The aim for the Study Group is to determine whether a homogenisation approach can be used to reduce the problem to one with axial symmetry. 
Extrusion of aluminum is an efficient manufacturing process which allows continuous production. The heated billet (aluminum material) is pushed through a metal die to produce the desired profile. Long continuous production accelerates aging of the die and hampers its ability to yield homogeneously shaped profiles. Hence the dies are usually removed before their breaking point and only go back into production after receiving a protective coating. The main goal here is to find the optimal life cycle for a die.
The idea for the project is simple - if any 3D body is given, how can it be built with LEGO bricks?

The unit volume (the smallest possible volume) in the LEGO universe is the so-called "generic LEGO brick": a brick 8 mm long and wide and 3.2 mm high, with only one position ("stud") for connecting with other bricks. Although there are a lot of different LEGO bricks, in this project only the use of "family" bricks is allowed. "Family" LEGO bricks are parallelepiped-shaped bricks that can be made of "generic" LEGO bricks by putting several of them next to each other and/or above each other.

The allowed dimensions of "family" LEGO bricks (if the dimensions of the generic LEGO brick are set to 1, 1, 1) are:


Please note that some bricks appear in different heights. Length and width of bricks correspond to their number of studs (connection points) in the horizontal plane.

The usage of LEGO DUPLO bricks is allowed also. Dimensions of DUPLO bricks, in "generic brick" measure, are:


There is one important restriction here: Due to construction, DUPLO bricks can be connected only to family bricks with an even number of generic bricks in the length and width dimensions, and only with the bricks of height at least 3.

So the task is:

For a "legoized" 3D model (i.e. a 3D model represented as a set of 1x1x1 generic LEGO bricks put next to or above each other), find an algorithm to build the model out of actual LEGO bricks from the tables above, so that the model stands connected. We assume that a brick is connected to a model if at least two of its connection points are connected to the model. (If the brick has fewer than three connection points, then it is enough that only one of its connection points is connected to the model.)
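The connection rule can be made precise with a small amount of code. The brick representation below (axis-aligned boxes on the unit grid, in generic-brick units) is an assumption for illustration:

```python
# Connection-rule sketch: a brick is "connected" if at least two of its studs
# meet another brick (one stud suffices for bricks with fewer than three
# studs).  Bricks are (x, y, z, length, width, height) in generic-brick units;
# studs engage only vertically, where one brick rests directly on another.
def stud_overlap(a, b):
    """Number of stud positions where brick b rests directly on brick a."""
    ax, ay, az, al, aw, ah = a
    bx, by, bz, bl, bw, bh = b
    if bz != az + ah:                  # b must start exactly on top of a
        return 0
    dx = min(ax + al, bx + bl) - max(ax, bx)
    dy = min(ay + aw, by + bw) - max(ay, by)
    return max(0, dx) * max(0, dy)

def is_connected(brick, others):
    studs = brick[3] * brick[4]
    needed = 1 if studs < 3 else 2
    touching = sum(stud_overlap(o, brick) + stud_overlap(brick, o)
                   for o in others)
    return touching >= needed

base = (0, 0, 0, 4, 2, 1)
good = (1, 0, 1, 2, 2, 1)   # rests on base with 4 studs engaged
bad = (3, 1, 1, 2, 2, 1)    # only 1 corner stud engaged
print(is_connected(good, [base]), is_connected(bad, [base]))  # True False
```

A full algorithm would run a check like this for every placed brick while searching over brick decompositions of the legoized voxel set.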

We do not want a model to be solid, meaning "full of bricks". Whenever it is possible to make an invisible hole inside a model, we would like it to be done, to use fewer bricks in building. But all the bricks should be connected to the model as described earlier. A good "rule of thumb" is that the width of the "wall" from the outside to the inside of the model is 4 connection points (it can be more or less in places; the shape of the inside hole is not important).

We can assume nothing about the shape of a legoized object except that it is connected. It can have any number of holes in it (a cup with two handles, for example).

The preferred output of this project is an algorithm, suitable for a computer program, for determining which brick should be put in which place. One of the problems we can see now is that there is no unique solution, i.e. every model can be built using many different combinations of bricks. A criterion that "more solid" models are preferred can be used to define a cost function. "More solid" models are models made with bigger bricks, and with bricks that have more of their connection points connected (and even these two criteria can be in opposition!).

I have tried to explain here a problem as it appears in "real life". I hope this is enough for you to make an exact mathematical formulation of the problem (although I am aware that, as with all "real life" problems, some terms and requirements are not defined quite precisely - we require that the outside of the final model is exactly as in the legoized model, but we do not have such precise requests about the insides of the model, not even the precise thickness of the "wall" - but, these are facts of life!)
{{c{The broek-system of the carillon in the museum}}}

A carillon consists of approximately 20 up to 45 bells. These bells hang in an open //lantern-tower// (like the //Munttoren// in Amsterdam)
or inside a tower under the roof. A carillon is played using a keyboard that is generally located one story below the bells. A wire connects
each clapper to the keyboard. The oldest and most well-known way to construct such a wire-connection is by using the so-called //broek-system//.
This broek-system has a simple structure, as can be seen in the following sketch.
{{c{Picture of a broek-system}}}
The ring in the middle is called the //broek-ring//, and three steel wires are attached to this ring. The wire going downwards connects the broek-ring to a key on the keyboard and is called the //key-wire//. The wire pointing upwards is called the //broek-wire// and is attached to a fixed point, and the wire connecting the clapper with the broek-ring is called the //clapper-wire//. By playing a key on the keyboard, the key-wire pulls the broek-ring down, and the clapper strikes the bell. Using this kind of wire construction, ''each and every'' bell of the carillon needs to be connected to the keyboard. From the photo above (of the carillon in the museum) it is hopefully clear that this is not an easy task, especially because there are quite a few requirements that an ideal //broek// has to meet. For one, the wires should not be too close to each other, because pulling on them will make them swing a little. Also, the proportions of the lengths of the wires and of the angles between the wires are of crucial importance for the carillon to play well and regularly. For each bell there is an optimal ratio of the angles between the wires.
{{c{The carillon of the church in Monnickendam}}}

!The problem
Until now, a carillon builder has placed the bells and wires of a carillon in a tower using only his experience. Furthermore, he will try to put the broek-system up in a geometrically balanced way. However, because of the constraints that must be satisfied, this will not be possible for all the bells (or at least it will be extremely hard to do) and will lead to problems when connecting the last bells to the keyboard.

The problem to be addressed at the study group is to design a system that tells the carillon-builder how to put the bells and wires (using the broek-system) in a tower such that
# the wires have sufficient distance to each other
# the keyboard plays as regularly as possible (every key should give a similar and not too large amount of counter pressure)
# the path the clapper moves is optimal (not too small, because that would lead to a feeble sound, not too large leading to a deformed sound)
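Requirement 2 couples the geometry to the counter pressure at the key through static equilibrium at the broek-ring. A toy two-dimensional force balance, with made-up wire directions and tension, looks like this:

```python
import numpy as np

# Toy 2D statics sketch for the broek-ring: the key-wire, broek-wire and
# clapper-wire tensions must balance at the ring.  Given the key-wire pull
# and the (illustrative) wire directions, equilibrium fixes the other two
# tensions -- and hence how the wire angles shape the counter pressure felt
# at the key.
def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def ring_tensions(u_broek, u_clapper, u_key, t_key):
    """Solve T_b*u_broek + T_c*u_clapper + t_key*u_key = 0 for T_b, T_c."""
    A = np.column_stack([u_broek, u_clapper])
    return np.linalg.solve(A, -t_key * np.asarray(u_key))

# Broek-wire straight up, clapper-wire up and to the side, key-wire pulled
# down and slightly sideways by the player.
t_b, t_c = ring_tensions(unit([0, 1]), unit([1, 1]), unit([-1, -2]), t_key=10.0)
print(t_b, t_c)  # both positive: every wire stays in tension
```

A real design tool would solve balances like this for all bells simultaneously, in 3D, together with the clearance and clapper-travel requirements.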
{{c{Ornament (Bell-founder Eijsbouts) This decoration has been designed by the sculptor Niel Steenbergen. It represents the three commemorations of Epiphany (6 January): the appearance of the Lord, the christening in the river Jordan and the wedding at Cana.}}}
* 2004, Dec 6-10: Bombay (India). [[SGMIP 2004|SGMIP 2004]]
* 2009, Mar 16-21: Roorkee (India). [[SGMIP 2009|SGMIP 2009]]
* 2011, Mar 21-26: Bangalore (India). [[SGMIP 2011|SGMIP 2011]]
* 2008, Jan 14-19: Guanajuato (Mexico). [[SPI 1|SPI 1]]
* 2009, Jan 12-16: Guanajuato (Mexico). [[SPI 2|SPI 2]]
* 2010, Jan 18-23: Guanajuato (Mexico). [[SPI 3|SPI 3]]
* 2011, Jan 17-22: Guanajuato (Mexico). [[SPI 4|SPI 4]]
The Engineering & Tooling sector may be said to be of the "single product" kind, i.e. each project is dedicated to one singular product with specific properties in terms of design, material, features, quantities, and market, among others. Therefore, each project needs a preliminary phase of research and development before entering the engineering and production phases.

Moreover, since this procedure is standard for the sector, the research and development phase belongs to the core business of the companies, and the investment spent in this phase is therefore not accounted as R&D investment but as an operational cost.

The goals of this problem are to understand the following points:
# How can a traditional Engineering & Tooling company reflect this investment in its official accounts (balance sheet and income statement)?
# What is the most appropriate cost system for the companies in this sector?

The pharmaceutical industry screens many thousands of compounds against a disease target to help select promising candidates for new medicines. This screening can typically take two forms, either by determining the potency of a compound by measuring a response against the target at a number of compound concentrations, or by measuring the response at a single compound concentration only. Clearly the latter method has advantages of increased speed and reduced cost. However, it has been observed that the correlation between the two methods can be very poor, i.e. compounds with a high potency in the first method do not necessarily have a high response in the second method. The impact of this poor correlation rate can be significant, both in terms of decision-making and financial costs. For example, when screening using a single concentration a false negative response can result in potential chemical targets being missed; and a false positive response can result in wasted consumable costs and time following up a compound that is not of interest. This is an industry-wide problem and so far investigations have not revealed the cause of the poor correlation.

!!Statement of Problem

Investigate whether there is a characteristic signature in the response vs. concentration results from multi-concentration tests that would predict a poor correlation with the single-concentration results.


We are able to make available data from both screening methods for a number of disease targets.

The HTA data consists of a single percentage effect measured at a single concentration from a plate. For any particular compound there may be 1 or 2 HTA percentage effects and rarely a few more. The IC50 data consists of percentage effects for a series of 11 concentrations, sometimes with a single result at each concentration and sometimes with duplicate results at each concentration. These are from the same plate. For any particular compound there may be several sets of such data (from different plates run at different times). One or two of the compounds are standard compounds that appear on every plate (and are used for quality control purposes) and so these can appear a large number of times.

The concentration used for the HTA data should be close to one of the larger concentrations used in the IC50 series.

In order to match the data from the HTA and IC50 experiments every HTA result appears in the dataset with every IC50 result. For example, suppose that there are 2 HTA results and 2 IC50 series with duplicates at each concentration. Each IC50 series will have 11 concentrations x 2 repeats = 22 results. These 22 results will be matched with each of the 2 HTA results, giving 44 results. This is then repeated for the other IC50 series, giving 88 results for this compound in the dataset.
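
The matching arithmetic above can be sketched as follows; the counts (2 HTA results, 2 IC50 series, 11 concentrations, duplicates) mirror the worked example, everything else is illustrative:

```python
# Sketch of the HTA/IC50 cross-join described above (illustrative, not the real dataset).
from itertools import product

n_hta = 2              # HTA percentage-effect results for one compound
n_ic50_series = 2      # independent IC50 series (plates)
n_conc = 11            # concentrations per series
n_reps = 2             # duplicates at each concentration

# Every individual IC50 result across both series:
ic50_results = list(product(range(n_ic50_series), range(n_conc), range(n_reps)))
# Every HTA result matched with every IC50 result:
rows = list(product(range(n_hta), ic50_results))

print(len(ic50_results))  # 44 (22 per series)
print(len(rows))          # 88 rows for this compound, as in the text
```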

Each plate (whether from an HTA experiment or an IC50 experiment) contains a number of control wells designed to yield a maximum response (signal) or a minimum response (signal). The results for each control are averaged, and these mean results are used to adjust the sample compound responses to a percentage effect.
$$%effect = 100 \times \frac{mean(maxs) - signal}{mean(maxs) - mean(mins)}$$
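
The normalisation above can be sketched in a few lines; the control-well signals below are illustrative, not taken from the dataset:

```python
# Plate normalisation: raw well signal -> percentage effect, using the
# mean max-control and min-control signals, as in the formula above.
def pct_effect(signal, max_controls, min_controls):
    mean_max = sum(max_controls) / len(max_controls)
    mean_min = sum(min_controls) / len(min_controls)
    return 100.0 * (mean_max - signal) / (mean_max - mean_min)

# Illustrative control wells (invented numbers):
maxs = [1000.0, 980.0, 1020.0]   # high-control wells
mins = [100.0, 90.0, 110.0]      # low-control wells
print(pct_effect(1000.0, maxs, mins))  # 0.0   (signal at the max-control mean)
print(pct_effect(100.0, maxs, mins))   # 100.0 (signal at the min-control mean)
```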
!!Data layout

| ''Column Name'' | ''Example of Data'' | ''Description'' |h
|COMPOUND_NUMBER|PF-00847074-00|Compound identifier|
|HTA_KEY|6013678240|Unique identifier for a HTA plate (and thus a HTA result)|
|HTA_CONC|3.00E-04|Concentration of HTA compound|
|HTA_PCTEFFECT|98.19295463|Percentage effect result for HTA|
|IC50_KEY|6014275837|Unique identifier for an IC50 plate (and thus all the results making up an IC50 curve have the same identifier)|
|IC50_CONC|3.00E-08|Concentration of IC50 compound at a position on the curve|
|IC50_PCTEFFECT|-15.63411|Percentage effect result for IC50 curve at concentration above|
|IC50_OPERATOR|==|"==" IC50 within conc range; ">" IC50 above top conc; "<" IC50 below bottom conc|
|IC50_M|1.42E-04|IC50 determined from curve fit|
|ic50_CV|8.608383775|Coefficient of variation for IC50 estimate|
|ic50_slope|2.99579413|Slope of IC50 curve at the IC50 concentration|
|slope_se|0.5186747|Standard error of slope estimate|
|ic50_MIN|-6.245449845|Lower asymptote from curve fit|
|ic50_MAX|100|Upper asymptote from curve fit|
|IC50_CURVE_CLASS|IS (Incomplete S-curve)|Internal classification giving an indication of completeness of curve<br>FS, FSP, FSS - full sigmoid curve<br>IS - Incomplete Sigmoid curve<br>NS - No sigmoid curve (flat response)<br>ND - Noisy Data|
|IC50_R_SQUARED|95.09548406|R2 statistic from curve fit|
|IC50_DF|3|Indicator of user intervention in curve fit<br>4 - no parameters fixed<br>3 - 1 parameter fixed<br>2 - 2 parameters fixed<br>1 - 3 parameters fixed<br>Can usually tell which parameters have been fixed by looking to see if Max = 100, Min = 0 or Slope = 1|
|HTA_SCREEN|S8992D|HTA Screen number (constant for whole screen)|
|HTA_RUN|lazari_o_2009-02-11_00701218_HTA_FRAG|HTA run identifier (a number of plates run on the same occasion make up a run)|
|HTA_START_DATE|Wed Feb 11 00:00:00 2009|Date of HTA run|
|HTA_PLATE_BARCODE|SDAA00053880|Barcode used to track plates|
|HTA_WELL_NUM|311|Position of HTA compound on a 384 well plate (starting at 1 in top left corner and running from left to right). Some well numbers do not appear as they are reserved for the controls.|
|HTA_BATCH|PF-00847074-00-0003|Batch of compound used in this experiment (will be the same for a number of runs)|
|hta_batch_source|STEK1|The store where the compound is kept|
|hta_batch_date|10-Feb-09|Date batch was put on store|
|IC50_SCREEN|S9004E|IC50 Screen number (constant for whole screen)|
|IC50_RUN|2009-02-13_VBN00700243_Farrow_I_Nav1.7; DMSO Fragment IC50|IC50 run identifier (a number of plates run on the same occasion make up a run)|
|IC50_PLATE_BARCODE|SDAA00051894|Barcode used to track plates|
|IC50_WELL_NUM|213|Position of this conc of IC50 compound on a 384 well plate (starting at 1 in top left corner and running from left to right). Some well numbers do not appear as they are reserved for the controls.|
|IC50_BATCH|PF-00847074-00-0003|Batch of compound used in this experiment (will be the same for a number of runs)|

Missing values are coded as "not found".
To be consistent, the following transformation must be applied for the Integrase, Progesterone and NaV1.8 screens: IC50_SLOPE = -IC50_SLOPE
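
The two clean-up rules above (the "not found" missing-value code and the slope sign flip for the named screens) might be applied as in this sketch; the field names follow the data-layout table, the sample row is invented:

```python
# Sketch of data clean-up: decode "not found" as missing, and negate
# ic50_slope for the screens named in the note above.
NEGATE_SLOPE_SCREENS = {"Integrase", "Progesterone", "NaV1.8"}

def clean_row(row, screen_name):
    out = {}
    for key, value in row.items():
        out[key] = None if value == "not found" else value
    if screen_name in NEGATE_SLOPE_SCREENS and out.get("ic50_slope") is not None:
        out["ic50_slope"] = -float(out["ic50_slope"])
    return out

# Invented sample row:
row = {"IC50_M": "not found", "ic50_slope": "2.99579413"}
print(clean_row(row, "NaV1.8"))
```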

!!Further details

The screening tests do not have simply a yes/no outcome: there is a quantitative measure of activity at each concentration used. This is usually transformed into a percentage effect using the means of high controls and of low controls on the same assay plate which determine a maximum and minimum activity level. There are theoretical and empirical models for the potency as a function of concentration. For the experiments across a number of concentrations, a 4-parameter logistic model is fitted to the `%response` vs. `log_{10}(concentration)` relationship. This model does have a theoretical basis (linked to the law of mass action) but it does assume a number of conditions that may or may not apply for individual curves corresponding to particular compounds. Indeed it may be a breakdown in this supposed relationship that corresponds to compounds that have a poor correlation between the two methods of screening.
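
As a concrete reference, here is one common parameterisation of the 4-parameter logistic model mentioned above; the screen's exact fitting convention may differ:

```python
# One common 4-parameter logistic (4PL) parameterisation: lower asymptote,
# upper asymptote, IC50 (midpoint concentration) and slope.
def four_pl(conc, ic50, slope, ymin=0.0, ymax=100.0):
    """%effect as a function of concentration (conc, ic50 > 0)."""
    return ymin + (ymax - ymin) / (1.0 + (conc / ic50) ** (-slope))

# At conc == ic50 the response is halfway between the asymptotes:
print(four_pl(1e-6, ic50=1e-6, slope=1.0))  # 50.0
```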

The random errors in the outcome of the screening tests are assumed to be normally distributed. However, there tend to be a number of outliers (or results that are too extreme to be explained by a normal distribution). Some of these outliers are very extreme. It is also assumed that the variance is constant regardless of the magnitude of the response. In some cases this is clearly not true and the variance increases with the size of the underlying raw measurement.

There are various potential causes of non-independence between test results. Some repeat measurements may be from the same assay run whilst others will be across different assay runs -- measurements from the same run are likely to be more similar than those from different runs. This seems to be the factor most easily identifiable as causing non-independence. Other factors may be operator, batch of cells, batch of reagent, etc.
In many cases accountants base their judgment of a financial report on samples taken at random. An approval is issued when the data in the report satisfy a given reliability interval (often a reliability of 95% is used), i.e., the errors found in the samples do not exceed a certain critical value. In reality, corrections are quite often made when the reliability interval is exceeded. Often such corrections are carried out on an isolated part of the financial report; after the correction, the errors found in that part are left out of the total error counts. Regarding this way of (ac)counting, there is now a dispute between the Rekenkamer and the auditing companies. The discussion is as yet undecided, but it is a very relevant one.
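
The sampling logic described above amounts to binomial acceptance sampling; a minimal sketch, with illustrative sample sizes and error rates:

```python
import math

# With n randomly sampled items and a true error rate p, the chance of
# finding at most c errors (the "approval" event) is a binomial tail sum.
def accept_probability(n, c, p):
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative: 100 sampled items, approve if at most 1 error found,
# true error rate 5%:
print(accept_probability(100, 1, 0.05))
```
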
* 2011, Jan 23-26: Thuwal (Saudi Arabia). [[KSG 1|KSG 1]]
[[Dr Kamel Bentahar|http://people.maths.ox.ac.uk/~bentahar/]] (Technology Translator for [[OCCAM|http://www.maths.ox.ac.uk/groups/occam]])

$\diagup X\ind Y \tilde{A} \varnothing \R$

$A = \left(\begin{array}{c c c}
1-x & 0 & 0 \\ 0 & 1-x & 0 \\ 0 & 0 & 1-x \end{array}\right)$

$\sum a_i + \sum_{i=0}^{+\infty} b_i$

$f(x) = \left\{\begin{array}{l l}
x^2 \sin \frac1x & \textrm{if } x \ne 0, \\
0 & \textrm{if } x = 0 .
\end{array}\right.$

$\displaystyle{ \lim_{x\to\infty} f(x) = k \choose r + \frac ab \sum_{n=1}^{+\infty} a_n + \left\{ \frac{1}{13} \sum_{n=1}^{+\infty} b_n \right\} }$
For the purification of water, hollow fiber membranes are increasingly being used. These are porous reeds with a diameter of 1 to 2 mm, made of a particular type of plastic. The 'wall' of these reeds has a fine porous structure that is permeable to water, but not to particles above a certain size. The size of the pores, which determines the permeability, is of the order of 10 to 500 nanometers (0.01 - 0.5 µm).

A filtration element (a module) comprises several thousand of these reeds glued together inside a tube. It is of great importance that all the reeds in a module are free of defects: a leaking reed contaminates the product (particles will pass through the imperfections).

Leakage in the reeds can be caused either during the production process or by handling during the fabrication of the modules. The defects we want to detect are of the order of 2 - 50 µm. Currently, leakages are detected during quality control by putting a module under water and applying air pressure to the outside of the reeds. When there is a leak in a reed, bubbles arise from its inside. The disadvantage of this detection method is that the modules become wet and must be dried before being shipped to the users. Furthermore, small leaks (<5 µm) often cannot be detected because the amount of air slipping through the wall into the reed is too small to be observed.

We are looking for an improved method, preferably a dry technique (e.g., using acoustic waves), for the detection of leaks in the reeds. To begin with, one can consider a single reed in order to demonstrate the basic ideas; eventually a method for analysis/detection at the module level is preferred. To this end, any existing principle from physics may be applied, provided that there are no significant risks to the operators. It should be clear that the method must be non-destructive.

!!Input parameters
* temperature interval of a module: 0 - 50 °C
* Pressure interval:
** From inside to outside: 15 bar
** From outside to inside: 10 bar
** (preference: maximal 3 bar)
* Inner-diameter of membrane: 800 µm
* Outer-diameter of membrane: 1300 µm
* Wall Thickness: 250 µm
* Variance in diameters: +/- 50 µm
* Porosity of the wall: 50-80%
* Membrane-pore size: 0.01 - 0.5 µm

In the sawmilling industry, logs are sorted to fit the needed products as well as possible. Usually logs are sorted into equal, roughly 10 mm top-diameter intervals, and the planned sets for sawing are fitted to those intervals. This is not at all the best way to maximize yield; a better way would be to search for the interval borders that best fit the present sales situation.

!The basic problem

How to determine the sorting borders in order to maximize the profit in each production planning period.

!The targets

To choose the modelling method so that:
* We can have the non-linear part of sorting the logs (integer model or other suitable method) and the linear part of other functions of the sawmill in one model.
* We can generate the model from all suitable initial data and a good solution (not necessarily optimal) can be found in a practical time when the model is feasible. It should be possible to add new parts into the model in the future (e.g. to model other parts of the sawmill more precisely).

To formulate the model so that:
* The model describes the log sorting and the other parts of the sawmill accurately enough for the model to be practical for planning in the sawmill industry.
* The model can be used in different ways in planning, e.g. to maximize profit, to minimize log usage under certain conditions, or to minimize the amount of side products (bark dust, sawdust).
* The model can be used to make sensitivity analyses of how the sawmill works, e.g. to find the tight spots in the production.

It would also be desirable to find a suitable commercial solver for the model, such that:
* Model generation and solving can be done in a reasonable time (max one hour).
* The cost of the solver is not too high.

Our estimation of the maximum model size is:
* 30 000 variables
* some thousands of constraints
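
As a toy illustration of the border-search idea (not the full sawmill model), one can brute-force a small set of candidate borders against an invented log mix and an invented value rule:

```python
# Toy border optimisation: choose 3 sorting borders for top-diameter classes
# so that total value is maximised. The log mix, candidate borders and the
# value rule are all invented for illustration.
from itertools import combinations

diameters_mm = [152, 163, 171, 184, 192, 205, 214, 228, 236, 249]
candidate_borders = [160, 170, 180, 190, 200, 210, 220, 230, 240]

def class_value(logs_in_class):
    # Assumed rule: a class is worth more when its logs are similar in size.
    if not logs_in_class:
        return 0.0
    spread = max(logs_in_class) - min(logs_in_class)
    return sum(logs_in_class) / (1.0 + 0.05 * spread)

def total_value(borders):
    edges = [0] + sorted(borders) + [10**9]
    return sum(class_value([d for d in diameters_mm if lo <= d < hi])
               for lo, hi in zip(edges, edges[1:]))

best = max(combinations(candidate_borders, 3), key=total_value)
print(best, round(total_value(best), 1))
```

In the real problem this search is coupled to the linear sawing-plan model, which is what makes a mixed-integer formulation attractive.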
Our challenge is to answer the following question:
What is the optimal stock level (measured in production days*) for the company's Sales Plan? This problem leads to two sub-questions:
* What is the optimal production batch for the current mix of the annual Sales Plan?
* How many job changes should the current sales plan require considering the existing Production Structure?

(*) Production days are defined as (average stock of the last 12 months / production of the last 12 months) x 365.
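
The footnote's definition, as a one-line helper (numbers illustrative):

```python
# Production days of stock, as defined in the footnote above.
def production_days(avg_stock_12m, production_12m):
    return (avg_stock_12m / production_12m) * 365

print(production_days(1000.0, 73000.0))  # 5.0 days of stock
```
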
* 2010, Jul 6-8: Annaba (Algeria). [[MSGMI Study Group with the Steel Industry|MSGMI Study Group with the Steel Industry]]

!!Study Groups series
!!!(1) Industrial
* Australia/New Zealand [[MISG|Australian and New Zealand Mathematics in Industry Study Groups]]
* Canada [[IPSW|Canadian Industrial Problem Solving Workshops]]
* Canada (Fields-MITACS) [[FM-IPSW|Fields-MITACS Industrial Problem Solving Workshops]]
* Canada (Montreal) [[M-IPSW|Montreal Industrial Problem Solving Workshops]]
* Europe [[ESGI|European Study Groups with Industry]]
* Hong Kong [[WIA|Workshops on Industrial Applications]]
* India [[SGMIP|Indo-UK Study Group Meetings on Industrial Problems]]
* Malaysia [[M-MISG|Malaysian Mathematics in Industry Study Groups]]
* Mexico [[SPI|Industrial Problem Solving Workshop]]
* Russia [[RSGI|Russian Study Group with Industry]]
* Saudi Arabia [[KSG|KAUST Study Groups in Mathematics for Industry]]
* South Africa [[MISGSA|Mathematics in Industry Study Groups in South Africa]]
* North Western Africa [[MSGMI|Maghreb Study Groups in Mathematics for Industry]]
* Turkey [[EASGI|Euro-Asian Study Groups with Industry]]
* UK [[MMSG|Mathematics in Medicine Study Groups]]
* USA [[Claremont|Claremont Colleges Math-in-Industry Workshops]]
* USA [[MPI|American Annual Workshops on Mathematical Problems in Industry]]
* [[Other|Other Study Groups]]
!!!(2) Biomedical
* UK [[UK-MMSG|UK Mathematics in Medicine Study Groups]]
* UK [[MPSSG|Mathematics in the Plant Sciences Study Groups]]
* UK [[VPH|Virtual Physiological Human]]

Back to the [[MIIS website|http://www.maths-in-industry.org/]]
* 2011, Mar 14-18: Johor Bahru (Malaysia). [[M-MISG 1|M-MISG 1]]
<script type="text/javascript" src="ASCIIMathML.js"></script>
<script type="text/javascript">translateOnLoad = false;</script>
By: Max Hansen and Karsten Matthiesen, Danfoss A/S.

The problem is to assess the feasibility of the method proposed below.

A tube is bent and secured to a common block in the manner shown in Figure 1.

{{c{Figure 1}}}

Beakers are filled with fluid from a supply tank.

The valve opens when an empty beaker is under the outlet. Since the fluid in the tubes was at rest before the valve opened, the fluid must gain momentum until it reaches an equilibrium between the head in the tank and the drag in the tubes.

The change in momentum creates forces on the tube. The forces are measured with four strain gauges glued to the tubes. The voltage from the strain-gauge bridge is amplified.

The amplified signal is fed to an integrator, and the result is assumed to be proportional to the flow. The flow is integrated in turn, and the result is the amount of fluid delivered to the beaker.

When this amount equals the wanted amount in the beaker, the valve is closed. The integrators are reset while the valve is closed and the flow is known to be zero.
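
The two integration stages described above can be sketched in discrete time; the bridge signal, gain and time step below are invented:

```python
# Discrete-time sketch of the double integration: the bridge signal
# (proportional to the rate of change of momentum) is integrated once to
# estimate flow, and the flow is integrated again to estimate the amount.
def delivered_amount(bridge_signal, dt, gain=1.0):
    flow = 0.0
    amount = 0.0
    for s in bridge_signal:
        flow += gain * s * dt      # first integrator: signal -> flow
        amount += flow * dt        # second integrator: flow -> amount
    return amount

# A crude valve-opening transient: a force pulse while the fluid accelerates,
# then zero signal once equilibrium flow is reached (arbitrary units).
signal = [1.0] * 10 + [0.0] * 90
print(delivered_amount(signal, dt=0.01))
```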

Points to be investigated:
* The relation between the weights of fluid delivered to the beakers and the signal from the bridge.
* The relation between head in tank and signal from the strain-gauge bridge just after opening the valve.
* The error due to the drag in the tubes.
* The error due to changing temperature and density during filling.
* The error due to offset drift in amplifier and strain gauges.
* The error due to changing pressure while the valve opens.
* The error due to loop constantly rotating around vertical axis.
* Other errors not thought of yet.

Karsten Matthiesen
Max Hansen
6 July 2002.
Danfoss Industrial Controls:

To set up
# an approximate but practically usable method for calculating the bandwidth in stationary operation, and
# a description of the stages in possible start-up situations.

The "Puls-Snubber" (see Figure 1) consists of a hole of about ø0.3x0.5. On the input side is the media pressure, and the output side looks into the "dead volume" of about 1500 $mm^3$. The medium can be gas or liquid, for instance air or hydraulic oil (32 cSt at 20 C). The pressure in the dead volume is measured by a pressure sensor that can be considered incompressible and of infinite bandwidth. The housing can be considered incompressible compared to the bulk modulus of the medium. At start-up the medium in the dead volume can be anything between air, a mixture, or liquid. In stationary operation the air will be partly dissolved in, and partly removed by, the initial liquid transport through the nozzle.

The function of the nozzle is to prevent cavitation in the dead volume, which can damage the thin sealing diaphragm. The bandwidth of the pressure measurement should be reduced as little as possible.
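
One simple way to frame the stationary-operation bandwidth question, offered here as an assumption rather than Danfoss's method, is a hydraulic RC analogy: the nozzle acts as a resistance R (pressure drop per unit flow) and the liquid-filled dead volume as a capacitance C = V/B, where B is the bulk modulus, giving a first-order low-pass with cutoff 1/(2*pi*R*C):

```python
import math

# First-order hydraulic RC sketch of the snubber bandwidth (an assumption):
# nozzle = resistance R [Pa·s/m^3], dead volume = capacitance C = V/B [m^3/Pa].
def snubber_cutoff_hz(R_hydraulic, dead_volume_m3, bulk_modulus_pa):
    C = dead_volume_m3 / bulk_modulus_pa
    return 1.0 / (2.0 * math.pi * R_hydraulic * C)

# Illustrative numbers: 1500 mm^3 dead volume, oil bulk modulus ~1.5 GPa,
# an assumed laminar nozzle resistance of 1e12 Pa·s/m^3.
print(round(snubber_cutoff_hz(1e12, 1500e-9, 1.5e9), 1))
```

The start-up stages (air, mixture, liquid) change the effective capacitance enormously, since trapped air is far more compressible than the liquid.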

{{c{Figure 1: MBS3250 with puls snubber}}}

[[Data Sheet #1 (PDF)|p/esgi/47/mbs3250.pdf]]
[[Data Sheet #2 (PDF)|p/esgi/47/pressure_pulse.pdf]]

[[The problem statement in PDF|p/esgi/47/project2003_control.pdf]]
During the past year and a half, Odense Steelshipyard has been working on a paint project, where the goal is to replace expensive decals with robot painting. For this purpose a new paint spray gun has been developed (a decal gun). This spray gun makes it possible to paint stripes with very sharp edges.

The biggest difference between this new spray gun and a traditional air-mix spray gun is the geometry of the nozzle from which the paint is ejected.
In a traditional spray gun, the paint is ejected through a small hole, either circular or elliptic, with dimensions of around 0.5-1.0 mm. In the new spray gun, this hole is replaced by a slit with a width of 0.2 mm and a length of 20-100 mm. The length of the slit determines how broad a stripe the spray gun can paint.

In order to make an off-line robot programming system, which can simulate the distribution of paint on the surface, it is necessary to have a mathematical model of the flux distribution of the spray gun. There exist models of the traditional spray gun, but not of the new decal spray gun, so the problem consists of developing such a mathematical model for the new spray gun.
The project was proposed by a public sector institution. The main idea is to develop techniques and methodology to assure privacy protection in publicly available statistical databases.
* 2004, Jan 19-23: Johannesburg (South Africa). [[MISGSA 1|MISGSA 1]]
* 2005, Jan 24-28: Johannesburg (South Africa). [[MISGSA 2|MISGSA 2]]
* 2006, Jan 23-27: Johannesburg (South Africa). [[MISGSA 3|MISGSA 3]]
* 2007, Jan 29- Feb2: Johannesburg (South Africa). [[MISGSA 4|MISGSA 4]]
* 2008, Jan 28- Feb1: Johannesburg (South Africa). [[MISGSA 5|MISGSA 5]]
* 2009, Jan 26-30: Johannesburg (South Africa). [[MISGSA 6|MISGSA 6]]
* 2010, Jan 11-15: Cape Town (South Africa). [[MISGSA 7|MISGSA 7]]
* 2011, Jan 10-14: Johannesburg (South Africa). [[MISGSA 8|MISGSA 8]]
* [[9th|http://www.imperial.ac.uk/maths-in-medicine]], Imperial College London, UK, Sep 7–11, 2009.
* [[8th|http://www.maths-in-medicine.org/uk/2008/]], U. of Loughborough, Sep 15–19, 2008.
* [[7th|http://www.maths-in-medicine.org/uk/2007/]], U. of Southampton, Sep 10–14, 2007.
* [[6th|http://www.maths-in-medicine.org/uk/2006/]], U. of Nottingham, Sep 11–15, 2006.
* [[5th|http://www.maths-in-medicine.org/uk/2005/]], U. of Oxford, Sep 12–16, 2005.
* [[4th|http://www.maths-in-medicine.org/uk/2004/]], U. of Strathclyde, Sep 13–17, 2004.
* [[3rd|http://www.maths-in-medicine.org/uk/2002/]], U. of Nottingham, Sep 9–13, 2002.
* [[2nd|http://www.maths-in-medicine.org/uk/2001/]], U. of Nottingham, Sep 10–14, 2001.
* [[1st|http://www.maths-in-medicine.org/uk/2000/]], U. of Nottingham, Sep 11–15, 2000.
* [[3rd|http://www.cpib.ac.uk/2009/the-third-mathematics-in-the-plant-sciences-study-group/]], Nottingham, Dec 14–17, 2009.
* [[2nd|http://www.cpib.ac.uk/2009/problems-and-reports-from-the-second-mppsg/]], Nottingham, Jan 5–8, 2009.
* [[1st|http://www.cpib.ac.uk/2009/problems-and-reports-from-the-inaugural-mpssg/]], Nottingham, Dec 17–20, 2007, [[Reports|http://www.maths-in-industry.org/miis/view/studygroups/mpssg1/]].
* 2007, Dec 17-20: Nottingham (UK). [[MPSSG 1|MPSSG 1]]
* 2009, Jan 5-8: Nottingham (UK). [[MPSSG 2|MPSSG 2]]
* 2009, Dec 14-17: Nottingham (UK). [[MPSSG 3|MPSSG 3]]
* 2011, Jan 4-7: Nottingham (UK). [[MPSSG 4|MPSSG 4]]
Susceptors are food containers made to absorb electromagnetic energy; they heat up and brown the food. Unfortunately, components of the plastics in the containers migrate into the food.

The questions for the Study Group concern estimating the temperature of the susceptors and improving the uniformity of the microwave heating.

<h2 align="center">Mixing in the Downward Displacement of a Turbulent 
Wash by a Laminar Spacer or Cement Slurry</h2>
<p align="center">Ian Frigaard and Giuliano Sona, Schlumberger Dowell</p>

<h4>Cementing overview:</h4>

After drilling successive stages of an oil well, the drill pipe is removed from the hole and a steel casing or 
liner is run into the bottom hole. The steel tube leaves an annular gap between itself and the rock formation. 
The drilling mud, which initially occupies both inside and outside of the steel tube, is displaced by pumping 
a sequence of fluids down the inside of the tube from surface and returning up towards the surface in the 
annulus. This leaves the liquid cement in the desired position, where it sets and forms a good hydraulic seal 
with the rock formation. Thus, the finished well consists of a telescopic sequence of cemented steel tubes. 
 <img src="http://www.maths-in-industry.org/past/ESGI/34/Shfg1.gif" align="top" height="400" width="700">

Figure 1:  Complete cemented well, showing  the telescopic arrangement of casings and liners.
The sequence of fluids pumped down the steel tube is often either: chemical wash followed by spacer fluid  
followed by cement slurry, or it is: chemical wash followed by cement slurry. The cement slurry is heavier 
than the spacer, which is heavier than the wash. There is sometimes a mechanical device separating the 
spacer and cement slurry, preventing mixing, but rarely any separation between the spacer and wash. 
Consequently, a frequent situation is that of a heavy spacer or cement slurry pushing a lighter wash down 
an inclined circular tube. This situation is mechanically unstable and the problem concerns predicting the 
behaviour of this two fluid system.

</p><h4>Some typical job parameters:</h4>
</p><ul><li> The chemical wash is a fluid with density and rheology that are very close to that of water, (density 
about 1000kg/m3, viscosity about 0.001Pa.s). The spacer is a fluid with density at least 1300kg/m3 and 
a non-Newtonian rheology. Slurries have non-Newtonian rheologies and densities above 1500kg/m3. 
</li><li> Spacers may or may not have a yield stress; they are often shear thinning, (e.g. power law with index 
between 0.2 and 0.7). Cement slurries have a yield stress, say between 1 Pa and 15 Pa, but may also be 
shear thinning. A Herschel-Bulkley model covers the range suitably. The main feature thought to be 
relevant to this problem is that the spacer is very viscous by comparison to the wash. A typical 
effective viscosity for the spacer is in the range  0.03 Pas to 1.0 Pas.
</li><li> The fluids are typically pumped at flow rates in the range 5.0-25.0 l/s in a tube of diameter 0.1m, 
(small liner), increasing up to the range 20.0-30.0 l/s in a tube of diameter 0.22m, (production casing), 
and further increasing up to 20.0-40.0 l/s in a tube of diameter 0.32m, (top casing).  These figures give 
Reynolds numbers in the range: Re<sub>wash</sub> = 30,000 - 320,000; Re<sub>spacer</sub> ~ Re<sub>slurry</sub> = 50 - 5000. 
</li><li> The spacer and slurry Reynolds numbers are not defined precisely since the fluids are non-Newtonian. 
For the majority of cases it is fair to say that the spacer and slurry are in a laminar regime. It is only if 
the pump rates are very high and/or if the fluids are extremely shear thinning that transitional and 
weakly turbulent flows are achieved. By contrast the wash is nearly always strongly turbulent. 
</li><li> The tube is inclined between 10 and 70 degrees to the vertical and can be between 500m and 5000m 
long. Typical fluid volumes pumped can correspond to a pipe-length of between 100m and 2000m.</li></ul>
</p><h4>Problem outline:</h4>
A chemical wash is part of many cementing job designs. The wash is included for its supposed efficiency in 
displacing mud from the walls of the annulus. There is therefore the implicit assumption that the wash does 
not mix significantly with the fluid that pushes it down the steel tube. Evidence for this assumption being 
true is weak. 
We would like to better understand the validity of this assumption. We are unable to conduct large scale 
experiments. We are unable to measure much that happens during a cement job, apart from surface 
pressures and flow rates. It is almost impossible to persuade a client to experiment with a real well. 
We believe that mixing generally occurs during downward displacement, as described. Mixing here could 
mean either that the heavy fluid by-passes the lighter fluid or could mean local mixing; (these fluids are 
compatible and this type of mixing is just a concentration of different fluids).
 <img src="http://www.maths-in-industry.org/past/ESGI/34/Shfg2.gif" align="top" height="400" width="700">

Figure 2: Different types of mixing during downward displacement.

</p><h4>Specific Questions:</h4>
1. Can the study group suggest any physical mechanism by which the fluid stages could be kept apart 
during downward displacement? No mechanical aids.<br>
2. Considering either interpretation of mixing, (or any other), can the study group give a reliable 
estimate of how fast the length of the mixed region will grow, as a function of the process parameters, 
(diameter, inclination, flow rate, fluid densities &amp; rheologies)?<br>
3. There is sometimes a little freedom to change flow rates and pipe diameters in cementing, although the 
length of the pipe remains fixed. What is the effect of either of these changes on the length of mixed 
zone? <br>
4. What happens if we stop pumping during downward displacement?<br>
5. Finally, can the study group compile a "list of simple truths" for this mixing process? Examples of 
what could go on this list are:
</p><ul><li> If you pump faster the mixed region will get to the bottom faster and have less time to grow.
</li><li> If you use a smaller pipe, (i.e. at the same flow rate), the mixed region will get to the bottom faster 
and have less time to grow.
</li><li> The only relevant dimensionless parameters are X, Y, Z.</li></ul>
Any more complicated statements may also go on this list, but they must be true.

Grundfos would like a model to be developed that describes the problem of mixing chemicals dosed into water systems. The application of the model should be dedicated to dosing an aqueous solution of chlorine into swimming pools.

The problem is imagined to contain two sub-models: the first concerns dosing a strong aqueous solution of chlorine into a pipe system, and the second the injection of purified and chlorinated water into the swimming pool.

The contamination of the water, and the chemical process reducing the chlorine content in the swimming pool, can be regarded as uniform, stationary and dependent on the number of bathers.
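
A minimal well-mixed balance consistent with the assumptions above (uniform, stationary, bather-dependent demand); all rates and the demand law are invented for illustration:

```python
# Well-mixed sketch of the pool chlorine balance: a dosing source term raises
# the concentration, while a bather-dependent demand consumes it.
def simulate_chlorine(c0, pool_volume, dose_rate, dose_conc,
                      demand_per_bather, n_bathers, dt, steps):
    c = c0
    for _ in range(steps):
        inflow = dose_rate * dose_conc / pool_volume       # dosing term
        demand = demand_per_bather * n_bathers / pool_volume
        c = max(0.0, c + (inflow - demand) * dt)
    return c

# Steady state when dosing exactly matches demand (invented numbers):
c = simulate_chlorine(c0=1.0, pool_volume=500.0, dose_rate=0.01, dose_conc=100.0,
                      demand_per_bather=0.05, n_bathers=20, dt=1.0, steps=1000)
print(round(c, 6))  # stays at 1.0: inflow and demand are both 0.002 per unit volume
```
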
with Aughinish Alumina

The Bayer process of alumina production (Al2O3.10H2O) from bauxite ore has been known since 1888. In the 1950s Kaiser developed a high temperature digestion technology which is now used throughout the world.

The basis of the process is the digestion of mineral-containing ore in a water solution of caustic soda (NaOH, 13%). The digestion of alumina requires high temperatures (250 deg C) and pressures. Heat is introduced through direct injection of high-pressure steam into the slurry. The reason for the direct heating is to avoid scaling - the formation of deposits on the walls of pipes and apparatuses (the biggest problem of the process). After digestion the dissolved alumina has to be separated from the slurry, and this is done by crystallisation, which requires low temperatures (40 deg C). Therefore the huge amount of heat introduced for digestion has to be removed. After precipitation of the alumina crystals the slurry is recycled and so must be heated again.

The Bayer process is highly energy intensive. From the energy management point of view there are two main streams to be considered: (1) alumina bearing "pregnant liquor" to be cooled from the digestion temperature of 250 deg C to the crystallisation temperature of 40 deg C. (2) alumina free "spent liquor", which has to be heated back to digestion temperature.

Very significant energy savings can be made by transferring the available heat of the hot stream ("pregnant liquor") to the cold stream ("spent liquor"). Indeed the digestion heat recovery system is the most significant energy aspect in the process.

Standard heat exchangers based on thermal conduction through the walls of pipes cannot be used because solid (silica) deposits tend to form, rapidly decreasing the heat transfer rate and eventually blocking pipes, passages and units. Therefore this type of industry uses a flash heat interchange system in which depressurisation induces boiling (flashing) of the pregnant liquor. Flashing releases water steam and cools the pregnant liquor (the heat of evaporation is removed), while the generated steam is directed to a condenser, where the spent liquor is heated (gaining the heat of condensation). Aughinish Alumina's setup uses multiple flash modules (flash tank with attached condensing heat exchanger).

The alumina production business faces continual pressure to increase throughput and efficiency (energy spent per tonne of product). This requires continuous improvement of energy management, including the flash heat exchange system. Aughinish Alumina, although statistically the best in the business (in terms of energy efficiency), still looks for further operational cost cuts. They are interested in creating a general mathematical model of the flash heat exchange process which could be used to help with better energy utilisation. The model should be able to cope with varying the number of stages, the size of and pressure drop in each stage, and the balance between the driving force distribution and the sizes of the heat exchangers, and should describe the negative effect of accumulated non-condensable gases.
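As a starting point for such a model, the staged heat recovery can be written as a chain of simple energy balances. The sketch below is a toy counter-current flash train; the equal-temperature-drop assumption, flow rates and heat capacity are illustrative, not Aughinish plant data.

```python
# Minimal energy-balance sketch of a multi-stage flash heat-recovery train.
# All numbers are illustrative assumptions, not plant data.

def flash_train(t_hot_in, t_hot_out, t_cold_in, n_stages,
                m_hot=1.0, m_cold=1.0, cp=4.2):
    """Equal-temperature-drop flash stages: in each stage the pregnant
    (hot) liquor flashes and its steam heats the spent (cold) liquor via
    a condenser.  Returns the stage-by-stage temperature profiles."""
    dt = (t_hot_in - t_hot_out) / n_stages
    hot = [t_hot_in - i * dt for i in range(n_stages + 1)]
    # Heat released by the hot stream in each stage (per unit time)
    q_stage = m_hot * cp * dt
    # Cold stream picks the heat up stage by stage (counter-current)
    cold = [t_cold_in]
    for _ in range(n_stages):
        cold.append(cold[-1] + q_stage / (m_cold * cp))
    return hot, cold

hot, cold = flash_train(250.0, 40.0, 40.0, n_stages=10)
# With equal flows and no losses the cold stream recovers the full
# 210 deg C rise; a finite exchanger area would leave a temperature
# approach in each stage, which is where the sizing trade-off enters.
print(hot[-1], cold[-1])
```

A real model would replace the prescribed equal temperature drops with stage pressures, and add the condenser driving-force and non-condensable-gas effects the problem statement asks for.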
The problem is to model the flow through a typical Danfoss thermostatic radiator valve.

Danfoss is able to employ Computational Fluid Dynamics (CFD) to calculate the capacity of valves, but an experienced engineer can often "guess" the capacity by rules of thumb with a precision similar to that achieved by the expensive and time-consuming CFD calculations. So CFD is only used for entirely new designs or where very detailed knowledge of the flow is required.

Even though rules of thumb are useful for those who have developed them, Danfoss wants an objective and general method which can be used to calculate the capacity of valves.

One proposed solution is to identify the significant parts of the interior geometry, quantify their influence, and model the valve as a sum of resistors in series. The model should be able to predict the capacity with a precision of 10% in the interesting range of capacities.
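The "resistors in series" idea can be made concrete as follows: for turbulent incompressible flow, each internal restriction has a flow coefficient Kv, the pressure drops add while the flow is common, and the coefficients combine as 1/Kv_total&sup2; = &Sigma; 1/Kv_i&sup2;. The Kv values below are made-up illustrative numbers, not Danfoss data.

```python
import math

# Combine the flow coefficients of internal restrictions in series:
# pressure drops add, flow is common, so 1/Kv_total^2 = sum_i 1/Kv_i^2.
def kv_series(kvs):
    return 1.0 / math.sqrt(sum(1.0 / kv**2 for kv in kvs))

# Illustrative Kv values for, say, seat, body passage and outlet.
restrictions = [2.5, 1.8, 4.0]
print(round(kv_series(restrictions), 3))
```

Note that the combined Kv is always smaller than the smallest individual Kv, i.e. the tightest restriction dominates the capacity, which matches the engineers' rule-of-thumb experience.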
Within a hurricane season, is there a tendency, under some conditions, for groups of hurricane tracks to follow a large-scale steering pattern? Can the steering pattern be identified in some sense? What is the unconditional probability that a steering pattern will exist in a given year? Can this probability be made conditional on large-scale climate variables (ENSO etc.) with any skill?
Thermal Ceramics produce a variety of different fibre compositions from a melt stream via the use of vertical spinning discs.

Standard production uses large contra-rotating wheels. Generally higher tap rates require larger diameter wheels. This process produces a lot of unfiberised material as well as the required fibre, which has a mean diameter of about 2 microns. Generally this unfiberised material is in the form of small spherical beads of glass which the industry refers to as shot. The shot particles can of course be anything in size from a few microns up to over 1000 microns in diameter. However, in practice we find that there are few shot particles < 44 microns in diameter. For ease of measurement Thermal Ceramics therefore measures the shot content as all non-fibrous material (shot) with a diameter > 44 microns. The measurement is carried out by sieving and reported as a weight percentage. This number usually varies between 45-55% by weight. The majority of the particles fall within the range 75-250 microns and form up to 75% of the weight of shot measured. Recent work has demonstrated that it is feasible to reduce this shot content significantly.

!Recent work -- Importance of Shot
Shot is detrimental to the product in 2 major areas.
Shot increases thermal conductivity: if it were not there the thermal conductivity would at the very least remain the same, and empirical data suggest that removing shot actually reduces it. Fibre is generally sold as a needled blanket with densities of the order of 100kg/m&sup3;. Less material could therefore be sold for the same insulating effect, lowering both costs and price. One of the big advantages of using fibre over bricks is the saving on thermal mass, so any reduction in density will enable savings to be made by customers in their applications.

A growing area of fibre use is in the automotive industry, which requires clean fibre with zero shot. Costs can therefore be improved if the initial shot can be reduced. This maximises production of fibre for cleaning and will reduce the need for investment in new expensive cleaning equipment as the market increases.

!Modelling work
We have demonstrated that it is possible to reduce shot significantly in the process. We have proposed a set of parameters which may influence shot content. We need to move towards production scale, where it is foreseen that there may be problems in scale-up. Experiments at larger scale are expensive and time consuming, as the scale approaches full production and trials disrupt the production environment. Modelling could help reduce development time and reveal potential problems early.

Some of the parameters proposed to be important include: Melt temperature, spinner speeds & angles, air stripper design and melt stream drop height.

We need to explore how the melt transfers onto the spinners and what kind of melt layer exists on the spinners and then how this breaks up into droplets, which are flung off to become fibres. Then we can try to understand what happens as the spinners are increased in size and the melt tap rate is increased. Hopefully the model will show what parameters are important and whether there is a gradual decrease in effectiveness or a watershed. 
Safety is of paramount importance in both the storage and the deployment of explosives, in military and civilian applications alike. Explosives are typically surrounded by other inert materials which contain, protect or confine them. Under normal circumstances there is very little movement of the explosive relative to its surroundings and no significant hazard. However, unforeseen events can occur, typically where the explosive is hit by some projectile accidentally, or possibly deliberately, when the resulting deformation of the explosive can cause motion of the explosive relative to its surroundings, sometimes with simultaneous compression.

It can occur that a resulting frictional interaction at the surface of the explosive results in sufficient heating to initiate it; that is, that the explosive starts to decompose. If conditions are adverse then the reaction can grow to thermal runaway and in extreme cases disastrous detonation. Attempts to model this physical process with a Lagrangian code such as DYNA struggle. This is because the onset of high shear in the explosive in the vicinity of the surface causes massive mesh distortion. It is believed that there is a layer where extreme shear deformation of the explosive is occurring. This may or may not coincide with a layer of melting and may or may not be a boundary layer depending on the loading.
High explosives have a crystalline structure, but it may be reasonable to suppose that the explosive may be modelled as an elastic-plastic material which melts to become a viscous fluid. Under those circumstances what insights can mathematical methods give? Could e.g. asymptotic methods have a useful role to play? An absolutely ideal outcome of the Study Group would be simple analytical formulae that allowed understanding of the physics and chemistry, but it is recognized that this is very unlikely. A much more realistic goal is the statement of some key boundary-value and initial-value problems, which if solved, would aid our understanding and the offering of some potential solution methods.

A possible approach is to investigate a series of ideal 2-D problems, starting with a steady-state incompressible formulation, ignoring heat effects and the reaction completely, then allow transient effects, moving on to an incompressible formulation with heating but no reaction, then to include compressibility, and the Arrhenius reaction of the explosive, etc. etc.
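The final step in that hierarchy, the Arrhenius reaction, already shows the runaway behaviour in a zero-dimensional toy. The sketch below integrates dT/dt = A exp(-E/(RT)) by forward Euler; all parameter values are invented for illustration and do not describe any real explosive.

```python
import math

# Zero-dimensional sketch of Arrhenius self-heating (the last step in the
# proposed model hierarchy): dT/dt = A*exp(-E/(R*T)).  Parameters are
# purely illustrative, not data for any real explosive.
def runaway_time(T0, A=1e12, E=1.2e5, R=8.314, T_crit=1000.0,
                 dt=1e-3, t_max=200.0):
    """Forward-Euler integration until T exceeds T_crit.
    Returns the time of runaway, or None if none occurs before t_max."""
    T, t = T0, 0.0
    while t < t_max:
        T += dt * A * math.exp(-E / (R * T))
        t += dt
        if T >= T_crit:
            return t
    return None

# The induction time is extremely sensitive to the initial (frictional
# heating) temperature - the essence of the safety question.
t_hot = runaway_time(600.0)
t_cold = runaway_time(500.0)
```

Even this caricature makes the point that the interesting mathematics is in the spatial problem: whether the shear-heated surface layer reaches such temperatures before conduction carries the heat away.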

It is hoped that there should be sufficient scope here to engage the academics over the week and to lead to some fruitful academic research.
Modelling Temperatures in Cold Rooms
Food Refrigeration and Process Engineering Research Group

!Brief description of the problem
This problem concerns a model currently being constructed of the cold rooms used in the food industry for food storage. The project intends to produce an easy-to-use program for modelling the temperatures in these cold rooms, with the aim of predicting the food temperature in any particular cold room set-up.

This model should run on a standard PC in a few minutes maximum and predict the temperature history of the food in a cold room. The model will have the same kinds of inputs and outputs as a previous model created by FRPERC, named CoolVan.

!Previous model
For a description of CoolVan, see the research section of the FRPERC website:


Click on 'Modelling' on the left-hand side navigation bar and then on 'CoolVan' on the navigation list at the top of the page.

!Current model
The cold rooms to be modelled consist of a room with insulated walls, within which one or more refrigerated units cool the room air; one or more fans are mounted on these units to blow the refrigerated air around the cold room.

A cutaway view of a simple cold room is displayed below, showing the fans, and the interior of the room including a shelf unit that would have food placed on it.

Problem 1 Figure 1

The model will be able to predict how food temperatures change over time, rather than calculating them only at steady-state conditions as some models do.

The main problem at the moment is that the model requires a simple way of predicting air flow around the room, given only a few variables, such as:

* room dimensions (height, width, depth)
* fan position
* speed and direction of air exiting the fan
* distribution of food in the room (at a later date - this is not currently in the model)

The project has access to CFD modelling, but for the sake of simplicity and speed the model cannot incorporate full CFD codes. We can use both CFD predictions and empirical measurements in developing the simpler model required.

!More detailed description of the problem

!!The model
The current model divides the room and walls into cuboid spaces ('blocks'). The food is not currently represented in the room (although it will be at a later date), so the room can be considered to be 'empty'. Door openings, and the ensuing ambient air infiltration, will also be represented at a later date.

A diagram is shown below for a simple room, divided into 5 layers in the depth direction. The example room has walls one block thick and a room that is 2 blocks wide, 2 blocks high, and 3 blocks deep. The front and back layers are entirely wall blocks (white cubes); the intermediate 3 layers are room blocks (grey cubes) surrounded by wall blocks, with the upper two blocks at the back being where the fans are in this model (darker grey cubes).

Problem 1 Fig2

In the final model, much more complicated rooms will be modelled, with many more room blocks, to enhance accuracy.

!!The airflow
The air movement around the room is represented by the rate of air movement across each of the six faces of the room air blocks, which is used for calculating the heat transferred around the room by the air. Heat transfer between the air blocks and the wall blocks is also calculated, and that between the air blocks and the food within the block will be added at a later date.

The same room as that described and shown above is displayed below, showing the room blocks (and surrounding wall blocks) from the side. This view also displays the flow of air that was used in this model. The flow passes from the top back block (where the fan is) across the face of the top middle block and from there into the bottom middle block and the top front block. The flow from the top front block passes into the bottom front block, and from there into the bottom middle block, where it rejoins the air from the top middle block and carries on to the bottom back block and from there back to the original block. This flow was considered to be symmetrical, so both layers in the width direction contain the same flows.

This is a simplified picture of the flows, which was used in the first models to ensure that the airflow part of the model was working. When the model is working as it should, these flows will need to be calculated within the model, rather than being specified beforehand as they were in the earlier models.

Problem 1 Fig3

CFD predictions have been, and are still being, carried out to assess if there are any empirical rules for the flows in such a room. It is assumed that the model will calculate the airflows either as a result of equation-based calculations, if they can be created, by a rule-based system, or by a look-up table if no other method is available. It is unlikely that incorporating full CFD-type equations into the model would result in a model that could run fast enough to be useful to the food industry, where rapid results are required.

!!The solution techniques
The temperatures in all of the blocks used to represent the different aspects of the room are represented by a set of linear equations. Each equation is an energy balance for a block in terms of the future temperatures of the surrounding blocks and the current and future temperatures of the block itself. The equations must therefore be solved simultaneously for the future temperatures. This implicit method is preferred as it allows much larger time steps while still maintaining stability. The new temperatures at each time step are calculated using matrix solution methods. At the moment, an enhanced version of a matrix triangulation method is being used to calculate the temperatures.

We would like to find a faster solution method, because as the complexity of the model increases (i.e. the number of blocks increases) the solve becomes very slow on a PC (say 300 MHz). A faster method would be likely to rely on the sparsity of the matrix for a more efficient solution.
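Because each block's energy balance involves only its (at most six) neighbours, the matrix is very sparse, and an iterative method that stores only neighbour lists makes one sweep cost O(number of blocks) rather than O(n&sup2;). The sketch below is Gauss-Seidel on a toy 1-D chain standing in for the real 3-D block equations; the coefficients are invented.

```python
# Sparsity-exploiting Gauss-Seidel sketch for the block energy balances.
def gauss_seidel(diag, neighbours, rhs, x0, tol=1e-10, max_iter=10000):
    """neighbours[i] is a list of (j, coeff) pairs giving the non-zero
    off-diagonal entries of row i; diag[i] is the diagonal entry."""
    x = list(x0)
    for _ in range(max_iter):
        biggest = 0.0
        for i in range(len(x)):
            s = rhs[i] - sum(c * x[j] for j, c in neighbours[i])
            new = s / diag[i]
            biggest = max(biggest, abs(new - x[i]))
            x[i] = new
        if biggest < tol:
            break
    return x

# Toy problem: 5 blocks in a row, an implicit conduction-like system
# (diagonally dominant, so Gauss-Seidel converges).
n = 5
diag = [3.0] * n
neighbours = [[(j, -1.0) for j in (i - 1, i + 1) if 0 <= j < n]
              for i in range(n)]
rhs = [1.0] * n
temps = gauss_seidel(diag, neighbours, rhs, [0.0] * n)
```

The energy-balance matrices here are diagonally dominant (the block's own heat capacity term dominates the exchange terms), which is exactly the condition under which such iterations converge; for still faster solves, conjugate-gradient-type methods on the same sparse storage are the natural next step.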
The project has been proposed by a public sector institution. The purpose is to make an attempt to develop models and measures to evaluate the effectiveness of funds utilization for scientific research and advanced technologies development, especially their long term effects.
Consumer products such as shampoo or tomato sauce are designed so that they appeal to consumers, encouraging them to buy those products. To that end, the industrial R&D organisation tends to focus on understanding and manipulating product attributes. However, buying behaviour is not only a function of the product: it is also, and in some cases perhaps more so, a function of the consumer, his social environment of other consumers, the marketplace with its competing products, and the brand marketing strategy. In order to design the best product, it is necessary to understand not just the physics and chemistry of the product, but also the psychology of consumers and the sociology of consumer groups or networks.

Our goal is to have a model of the marketplace that describes certain aspects of consumer buying behaviour. There are two main parts to such a model:
* A description of a population of consumers, which each choose (buy) repeatedly one of a number of competing brands (we can ignore the difference between product and brand in this case). This subdivides into a description of the behaviour of a single consumer (consumer psychology model), and of the collective behaviour of a group, in other words of the interactions between consumers (consumer sociology model).
* A description of brand management: agents (brand managers) change the attributes of a brand such as price or quality in response to events in the marketplace.

Traditional marketing models tend to focus on the second element, and treat the large number of consumers or customers in a very macroscopic, averaged way: e.g. they only look at market share for each brand. Thus a constant market share can be a result of a dynamic equilibrium, but this macroscopic viewpoint cannot see or describe this. Alternatively, one can focus on individual consumers and their buying behaviour, and try to derive observable large scale effects, like changes in market share. We see an analogy with the situation in physics: the traditional macroscopic view of thermodynamics was later shown to result from the averaged behaviour of populations of individual molecules (statistical physics).

Traditional market models are typically in the form of differential equations, e.g. describing market share as a function of time. It would already be interesting to consider adding (random) spatial variations, to account for different consumer preferences. In Appendix 3 we describe an agent based model on which we have performed simulation studies; here approaches from statistical physics may be useful. Also epidemiology may give ideas, or traffic flow models, and brand management may perhaps be approached with game theory. Ideally we would like to connect the microscopic consumer viewpoint to the macroscopic viewpoint of the brand manager with an encompassing description (again, analogous to statistical physics/kinetic theory/thermodynamics).
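To fix ideas, here is a minimal agent-based sketch of repeat brand choice (not the model of Appendix 3; all parameters are invented): each consumer rebuys its previous brand with a given loyalty probability, and otherwise copies a randomly chosen other consumer. Even this toy exhibits the point made above, that a roughly constant market share at the macroscopic level coexists with continual switching at the individual level.

```python
import random

# Toy agent-based market: N consumers repeatedly choose among brands.
# With probability `loyalty` a consumer rebuys its previous brand;
# otherwise it imitates a randomly chosen other consumer (a crude
# stand-in for social influence).
def simulate(n_consumers=500, n_brands=3, loyalty=0.9,
             n_steps=200, seed=0):
    rng = random.Random(seed)
    choice = [rng.randrange(n_brands) for _ in range(n_consumers)]
    for _ in range(n_steps):
        for i in range(n_consumers):
            if rng.random() > loyalty:
                choice[i] = choice[rng.randrange(n_consumers)]
    shares = [choice.count(b) / n_consumers for b in range(n_brands)]
    return shares

print(simulate())  # market shares after 200 buying rounds
```

As loyalty tends to 1 the imitation dynamics slow down and shares freeze, a caricature of lock-in; the Study Group questions ask when such effects require the social interaction term and when consumer psychology alone suffices.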

Three specific challenges we would like to pose to the Study Group are as follows, with further background given in appendices (of the PDF version of the problem):
* Construct a market model that exhibits the decoy effect (explained in Appendix 1).
* Are customer interactions (social networks) needed for lock-in to occur (explained in Appendix 2), or can consumer psychology explain this (cf. Appendix 4)?
* How can we formalise consumer and market insights in a mathematical model? (E.g. so we can investigate under which conditions/assumptions brand sales figures exhibit non-Gaussian fluctuations. Observations have shown such.)

A valid model would need to show at least one of these effects qualitatively.

Further details are available in the [[PDF  version of the problem description|p/esgi/49/unilever2.pdf]].
with Vistakon

Vistakon Ireland (Johnson & Johnson Vision Products) manufacture contact lenses. In the initial stages of the lens manufacturing process, moulds for the front and back of the lens are produced: these are called the front curve and the base curve respectively. The front curve is filled with monomer and then the base curve is pushed in place on top of it. The monomer is then polymerised by exposure to ultraviolet light. The group is asked to develop mathematical models of two stages of the process with the aim of suggesting ways in which they can be redesigned to avoid common problems and increase efficiency.
# To fill the moulds, monomer is first sucked from a tank by a piston then forced into the front curve by the same piston. During the suction phase cavitation bubbles can form. During the expulsion phase monomer flow continues after the pump has stopped moving. Waiting for this to stop (so a controlled dose of monomer is delivered) wastes time.
# When the base curve is pushed down on the front curve it is essential that the monomer spreads evenly over the front curve. Sometimes the spreading is asymmetric, resulting in an uneven monomer overflow. 
* 2007, Aug 20-24: Montreal (Canada). [[M-IPSW 1|M-IPSW 1]]
* 2008, Aug 18-22: Montreal (Canada). [[M-IPSW 2|M-IPSW 2]]
* 2009, Aug 17-21: Montreal (Canada). [[M-IPSW 3|M-IPSW 3]]
* 2011, Aug 15-19: Montreal (Canada). [[M-IPSW 4|M-IPSW 4]]
Thicker coatings of paint, made up of water, polymer latex and titanium dioxide pigment with no organic solvents, exhibit mudcracking when they dry. This is due to a build-up of stress in the drying film. There is a need to develop a theory of mud mechanics which shows the roles of the pigment volume concentration, the paint thickness and the latex viscosity in developing stress in the film.

The aim is to reduce the build up of stress in the drying film to avoid crack initiation.
Considering all kinds of products used in daily life, especially thermoplastic or metallic products manufactured using moulds or special tooling, the goal of this problem is to evaluate the impact and value of this specific industrial sector - Engineering & Tooling - in the national economy. This impact should be measured in terms of perceptible value, rather than annual/monthly revenue.

Iberomoldes expects the perceptible value for the end user to be much larger than the annual/monthly revenue of the corresponding sector, but the following points are still unclear:
# How much larger is this perceptible value?
# What is the weight of the sector in the national economy, and how representative is the sector of the full economy?
The {{{<<newTiddler>>}}} macro displays a button that can be clicked to create a new tiddler. By default the new tiddler is opened in edit mode; you can also specify a custom template.

The available parameters are:

|!Parameter |!Description |
|label |The text of the button |
|prompt |The tooltip for the button |
|accessKey |The access key to trigger the button (specify a single letter; different browsers require a different modifier key like Alt- or Control-) |
|focus |Which of the editable fields to default the focus to (eg, "title", "text", "tags") |
|template |The template to use to display the new tiddler (defaults to EditTemplate) |
|text |The default text for the new tiddler |
|title |The default title for the new tiddler |
|tag |A single tag to be applied to the new tiddler (repeat this parameter to specify multiple tags) |

For example: <<newTiddler label:"try this" accessKey:1 focus:tags text:"hello there!" tag:greeting tag:"an example">> (can also be triggered with Alt-1)
<<newTiddler label:"try this" accessKey:1 focus:tags text:"hello there!" tag:greeting tag:"an example">>

You can only prime the initial values of fields that map to a text input box in the specified template (for instance, if you specify the standard ViewTemplate as the template you won't be able to prime any fields). For example, this doesn't work as you might expect:
<<newTiddler template:ViewTemplate text:"To be or not to be">>
<<newTiddler template:ViewTemplate text:"To be or not to be">>
!Leak Noise Generation in Underground Water Pipes
Mecon Ltd., Cambridge

The hissing noise from leaks in buried water pipes is detectable by attaching accelerometers to accessible fittings (e.g. fire hydrants), and it provides a means of locating leaks. In outline, two accelerometers are put on fittings say 100m apart, and the signals from them are cross-correlated. If the leak is between them then the lag of the maximum of the cross-correlation gives you an idea of where the leak is. If it is not, then the sign of the lag tells you in which direction along the pipe to search for the leak.
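The localisation step itself is one formula: with sensors A and B a distance d apart and noise travelling at speed c along the pipe, a leak at distance x from A produces a lag tau = (d - 2x)/c between the two arrivals, so x = (d - c*tau)/2. The sketch below uses the problem's 100 m spacing and an illustrative propagation speed (the relevant speed is that of the leak noise along the pipe, which depends on pipe material, as described under "Additional information").

```python
# Cross-correlation leak location: distance of the leak from sensor A,
# given the lag tau of the correlation maximum (taken positive when the
# signal reaches A before B).  d and c below are illustrative values.
def leak_position(d, c, tau):
    return (d - c * tau) / 2.0

x = leak_position(d=100.0, c=1000.0, tau=0.02)
print(x)  # 40.0 m from sensor A
```

The Study Group questions are about what comes before this step: the generation mechanism and what the spectrum says about leak size, which the lag alone cannot reveal.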

The main problem that we wish the Study Group to focus on is: What are the noise generation mechanisms at leaks and what information do the acoustic characteristics of the noise tell us about the leak, in particular about its size?

!Additional information
The water pipes may be of metal or plastic and these have different characteristics. In plastic pipes the high frequency noise is much more rapidly attenuated along the length of the pipe, and detection is based on low frequencies, e.g. up to 200Hz in [1]. In metal pipes, high frequency noise is more prominent, e.g. the experiments carried out by Mecon using a DI pipe ([2], DI = ductile iron) show resonances at 3.3kHz, 4.175kHz, and a broad maximum around 16kHz. There are also some differences between steel (longitudinal wave speed around 5.9km/s), ductile iron (5.6km/s), and cast iron (somewhat anisotropic, and a lower speed, around 4.4-4.8km/s). There are also differences in the shape of leak that pipes are prone to: a leak in a steel pipe generally arises from a pit that deepens until it forms a circular hole; but a leak in a cast iron pipe is more frequently a crack. Plastic pipes usually fail at butt-welded joints.

!Current position
Leak noise tests carried out by Mecon and reported in [2] show noise spectra measured at different points along a 5m length of DI pipe, for a range of leak sizes and a range of water pressures. There is a broad peak at around 16kHz whose frequency is independent of pressure and leak size. The height of the peak rises with pressure to a maximum of about 140 ± 5dB (relative to 1 uPa), the height of this maximum being roughly independent of leak size. At low frequencies however, the noise levels are very sensitive to leak size, e.g. below 100Hz the noise from an 8mm diameter hole is 70 dB louder than from a 1mm hole. The original data from the tests can be made available for re-analysis if required. Mecon are already in touch with the NDT group at Imperial College about the propagation modes in a fluid-loaded shell.

!Discussion points
Areas where Mecon would like the Study Group to take things forwards:

# What is the noise generation mechanism?
# What part of the spectrum is due to cavitation, and what part due to unsteady flow separation, or to other processes?
# Does the relative importance of these change with frequency, and does it change between plastic and metal pipes, does it change with pressure in the pipe, with pressure outside the pipe or with backfill type?
# How can leak size be estimated?
# What experiments (e.g. on longer runs of buried pipe) would be most valuable to develop this work further? 

[1] Acoustical Characteristics of Leak Signals in Plastic Water Distribution Pipes. Osama Hunaidi and Wing T. Chu. Applied Acoustics Journal. Available from http://fox.nrc.ca/irc/fulltext/nrcc42673.pdf
[2] Leak Noise Tests. Mecon Report.
[3] Lapshin BM and Nikolaeva ED. Influence of the size of a through defect on acoustic emission in escape of liquid into liquid from a hole in a thick wall of piping. Sov J NDT 26, 11-July-1991, 811-816.
Many diabetics must measure their blood glucose levels regularly to maintain good health (Appendix 1). In principle, one way of measuring the glucose concentration in the human body would be by measuring optically the glucose content of the aqueous humor in the eye.

Lein Applied Diagnostics wish to assess how feasible this is,
# purely by a system using a linear confocal scan (Appendix 2) with an LED source, as described below; and
# by supplementing such a system with other suitable measurements.

The sensitivity of the refractive index of the aqueous humor to the glucose concentration is of the order of one part in 10^^5^^ for a change in glucose concentration of 5mg/dl, and concentrations of between 50mg/dl and 400mg/dl need to be detected reliably.

The use of a confocal scanning technique enables one to measure the optical depth of the aqueous humor to this required accuracy. The optical depth, D, is given by L/n where L is the physical depth of the anterior chamber and n is the refractive index of the aqueous humor. This direct measurement cannot be made in practice as the real depth of the anterior chamber changes due to corneal swelling and accommodation of the ocular lens.
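An order-of-magnitude check shows how demanding this measurement is. Using the relation D = L/n above and the quoted sensitivity (the index changes by about one part in 10^^5^^ per 5mg/dl of glucose), the sketch below computes the change in optical depth over the full physiological range; the chamber depth and base refractive index are assumed typical values, not Lein data.

```python
# Order-of-magnitude sketch of the measurement requirement.
L = 3.0e-3          # physical depth of anterior chamber, m (assumed)
n0 = 1.336          # refractive index of aqueous humor (assumed)

dn_per_step = n0 * 1e-5          # index change per 5 mg/dl glucose step
steps = (400 - 50) / 5           # glucose range 50-400 mg/dl
dn = dn_per_step * steps

D0 = L / n0                      # optical depth at low glucose
D1 = L / (n0 + dn)               # optical depth at high glucose
print(D0 - D1)   # change in optical depth over the full range (metres)
```

The full-range change is of the order of a micron, so each 5mg/dl step corresponds to tens of nanometres of optical depth, while L itself varies with corneal swelling and accommodation, which is exactly why the problems below ask for additional information to separate n from L.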

!The Problems
# Is it possible to use other information obtainable from the confocal scan to resolve this point? In particular, the measurement detects retro-reflections from the front and back of the cornea, and from the front and back of the lens, in addition to the measurements giving the locations of the various surfaces. Do these retro-reflections provide the necessary information?
# If the scan can only tell us the optical depth, what else could be measured that would enable the refractive index to be obtained to the required accuracy? In particular, can this be achieved by any of (or some combination of) the following:
## Taking measurements at different wavelengths of light. Two wavelengths allow the measurement of the dispersion of the aqueous humor, which is a function of the glucose concentration.
## Taking several measurements, say one when the subject is focusing on infinity and one when he or she is focusing in the near-field.
## Use of polarization (since glucose is optically active).
## Use of spectroscopic techniques.
## Other suggestions 

The appendices are contained in the [[PDF problem description|p/esgi/49/lein.pdf]]. 
In some spinning processes, split blowers are used to transport and stretch filaments. Pressurised air is forced through a nozzle to create a high-speed flow. The objective of the study is to minimise the amount of pressurised air used while maintaining sufficient frictional forces on the surface of the filaments. Probably one of the main variables is the geometry of the blower.
Rangeland Foods

Rangeland Foods sells a number of beef products, using a specific recipe for each of them. A key parameter of the final product is its weight percentage of fat; depending on the recipe, this may vary between 17% and 36.5%. The meat used for these products is bought with an approximate fat content estimated visually by the butchers. When this meat enters the storage facility of Rangeland, its fat content is measured with a scanner, and the results are accurate to within 1%. The meat is divided and stored in crates. About 20 crates of meat are mixed for each recipe. The group will be asked to determine which crates should be used, subject to the following constraints:
*The planning should be organised with available stocks.
*The fat content for each final product must be within 1% of the target value.
*Older meat should be used first.
*The meat is stored in crates either as fresh meat (temperature above 0 °C), chilled (-5 °C) or frozen (-20 °C). Both fresh and chilled meat can be used immediately. It takes 48 hours to go from frozen to chilled.
*To minimise concealment, crates from the same origin should be used together.

Time permitting, the group could also investigate an optimal way of storing the meat in the cold store that would minimise the movement of the crane storing and picking up the crates. Rangeland Foods uses a 40 x 13 stack storage facility.
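The fat-blending constraint alone can be sketched as a small combinatorial search: choose a set of crates whose weight-averaged fat content lands within 1% of the recipe target. The data below are invented, and a real instance (around 20 crates drawn from a large stock, plus the age, origin and temperature constraints) would call for integer programming or heuristics rather than brute force.

```python
from itertools import combinations

# Toy sketch of the crate-selection step: pick n_pick crates whose
# weight-averaged fat percentage is within `tol` of the recipe target,
# ignoring the age/origin/temperature constraints for brevity.
def pick_crates(crates, target_fat, n_pick, tol=1.0):
    """crates: list of (weight_kg, fat_percent) pairs.
    Returns (combination, achieved_fat) or None if infeasible."""
    for combo in combinations(crates, n_pick):
        w = sum(c[0] for c in combo)
        fat = sum(c[0] * c[1] for c in combo) / w
        if abs(fat - target_fat) <= tol:
            return combo, fat
    return None

# Invented stock: (weight in kg, measured fat %)
crates = [(300, 15.0), (300, 22.0), (250, 30.0), (280, 18.5), (320, 26.0)]
result = pick_crates(crates, target_fat=20.0, n_pick=3)
```

Adding the "oldest first" and "same origin together" preferences turns this into a multi-objective assignment problem, which is where the group's work would start.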
with Carton Brothers

Manor Farm Chickens produce chickens for the Irish market. They are interested in optimising the match between the demand for chickens in various weight categories and the supply brought in. The mismatch between supply and demand must be accommodated by substituting in chickens from a higher weight category if demand in that category cannot be met. This is economically inefficient. The group is asked to develop an algorithm to optimally schedule egg incubation and chicken slaughtering times in order to minimise this mismatch. Inputs to the process include forecasted demand (available 13 weeks in advance), actual demand (known on the day), capacities of chicken growing farms, and current and projected weights of birds on the farms. 
with Intel

Conferencing hardware solutions typically optimise voice quality by mixing a selected number of voice streams (participants), rather than blindly mixing all streams. At present, this requires decoding all incoming streams and then using a loudest-speaker algorithm to determine which streams are to be retained. The goal of this problem is to use the encoded G.729A bit stream to determine the loudness level of each voice stream. Using the encoded stream instead of decoding first would save substantially on computation cycles (G.729A is the second most popular codec in the market, but is also the most computationally expensive). So the questions for the Study Group are:

# Can the speech synthesis parameters in the G.729A encoded stream be used to calculate an equivalent Voice Energy value?
# Can subframe pitch delay and codebook data be used to calculate Voice presence and Voice level? 
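Whatever loudness proxy the Study Group derives from the encoded parameters, the surrounding selection step is codec-independent. The sketch below assumes only that some per-stream loudness proxy is available (here invented numbers; in the real system it would come from the G.729A gain/codebook parameters rather than from decoded samples) and picks the top-k streams to mix.

```python
import heapq

# Generic loudest-speaker selection, independent of the codec: given a
# per-stream loudness proxy, return the ids of the k loudest streams.
def select_speakers(loudness_by_stream, k=3):
    return [sid for sid, _ in heapq.nlargest(
        k, loudness_by_stream.items(), key=lambda kv: kv[1])]

# Invented proxy values for four participants.
proxies = {"alice": 0.82, "bob": 0.10, "carol": 0.55, "dan": 0.31}
print(select_speakers(proxies, k=2))  # ['alice', 'carol']
```

The computational saving sought in the problem comes entirely from computing the proxy without running the full G.729A decoder for the unselected streams.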
The project has been proposed by a non-profit organization. The main challenge is to propose a methodology to generate investment proposals and periodic reports for the purpose of the educational game VCR. Proposed methods should make the game world resemble a real venture capital market.
<h1 align="CENTER">Strategic Resource Planning for Optimum Service Quality</h1>
<p align="center">Gail Lochtie, BT</p>
Within BT, delivering a quality service, in terms of meeting customers' 
expectations on provisioning and repair of basic telephony, is key to 
its operations. In order to provide a flexible and comprehensive service
 to customers, BT's operational units are organised into small work 
teams. These teams have a constant pool of <i>N</i> engineers available to carry out the work. The type of work to be completed can be classified into one of <i>j</i> job types. Each job must be completed within the time limits set for work within that job type.

The time limits set for a given type of job are related to customer 
expectations of the time taken to repair or provide a service. 
Therefore, BT has repair and installation target times which are 
expressed in terms of the fraction, <i>M</i>, of jobs completed in less than a predetermined lapse time,  <img alt="tex2html_wrap_inline51" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img1.gif" align="BOTTOM" height="8" width="9">, where  <img alt="tex2html_wrap_inline51" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img1.gif" align="BOTTOM" height="8" width="9">
  is defined as the time between a job being requested and completed. 
Thus, to accurately predict whether the target will be met for the <i>j</i>th job type, the probability density function  <img alt="tex2html_wrap_inline57" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img2.gif" align="MIDDLE" height="29" width="40">  must be known.
The distribution function  <img alt="tex2html_wrap_inline57" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img2.gif" align="MIDDLE" height="29" width="40">
  will depend on the method used to allocate or schedule jobs to individual
 engineers, the time taken by the engineer to travel to the job and the 
time the engineer takes to complete the job. Job scheduling is currently
 undertaken by the Work Manager software, which assigns jobs to 
engineers whilst attempting to meet the predetermined time targets for 
each job. Thus, the exact form of  <img alt="tex2html_wrap_inline57" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img2.gif" align="MIDDLE" height="29" width="40">
  will be dependent on the strategy which is employed within Work 
Manager to meet the targets. Information on the time taken to complete 
each job can be obtained from the job allocation schedule, and this 
information can be used to build a probability distribution of the 
length of time taken to complete a given job type. Data collected from 
the Work Manager system suggests that it is reasonable to assume that  <img alt="tex2html_wrap_inline57" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img2.gif" align="MIDDLE" height="29" width="40">  follows a Rayleigh distribution. Thus, the probability density function for the <i>j</i>th job type, with the random variable  <img alt="tex2html_wrap_inline51" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img1.gif" align="BOTTOM" height="8" width="9">, is taken to be
</p><p> <img alt="equation12" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img3.gif" align="BOTTOM" height="50" width="500"> </p><p>
where  <img alt="tex2html_wrap_inline69" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img4.gif" align="MIDDLE" height="17" width="16">  is the average lapse time.
The targets are defined as the fraction, <i>M</i>, of jobs to be completed in less than a given target time <i>T</i>. Thus, if this target is to be met for the <i>j</i>th job type, then the following inequality must be satisfied
</p><p> <img alt="equation18" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img5.gif" align="BOTTOM" height="42" width="500"> </p><p>
which implies
</p><p> <img alt="equation21" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img6.gif" align="BOTTOM" height="42" width="500"> </p><p>
Thus, an expression can be obtained which, for a given job type <i>j</i>, relates the average lapse time,  <img alt="tex2html_wrap_inline79" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img7.gif" align="MIDDLE" height="26" width="11">  to the target time  <img alt="tex2html_wrap_inline81" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img8.gif" align="MIDDLE" height="26" width="16">  to complete the fraction of jobs  <img alt="tex2html_wrap_inline83" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img9.gif" align="MIDDLE" height="26" width="22"> .
Since all the <i>j</i> job categories are carried out by a common resource pool of engineers, the  <img alt="tex2html_wrap_inline87" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img10.gif" align="MIDDLE" height="29" width="61">  distributions, and thus the parameters  <img alt="tex2html_wrap_inline79" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img7.gif" align="MIDDLE" height="26" width="11">, cannot be considered as independent of each other. Hence, their interaction may be expressed  as
</p><p> <img alt="equation27" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img11.gif" align="BOTTOM" height="42" width="590"> </p><p>
where  <img alt="tex2html_wrap_inline91" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img12.gif" align="MIDDLE" height="26" width="16">  is determined by the scheduling algorithm, and represents the probability of engineers working on job type <i>j</i>,  with the constraint
</p><p> <img alt="equation32" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img13.gif" align="BOTTOM" height="53" width="500"> </p><p>
<u>Specific Questions</u>
For a given size of work force <i>N</i>, what are the values of  <img alt="tex2html_wrap_inline91" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img12.gif" align="MIDDLE" height="26" width="16">  such that the values of <i>T</i> and <i>M</i> are optimum?
So far, solutions to this problem have been obtained numerically. This 
is a very time-consuming process. Are there alternative mathematical 
methods which enable the solution to be obtained, or bounds to be 
placed on the numerical search space?
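One analytic bound follows directly from the Rayleigh assumption: the target condition P(t &lt; T) ≥ M can be inverted in closed form to give an upper bound on the distribution's scale parameter. The sketch below uses the standard scale-parameter form of the Rayleigh distribution (the report parameterises it via the average lapse time, which differs only by a constant factor); the target values are hypothetical.

```python
import math

# Numerical sketch of the Rayleigh target condition: if lapse times for job
# type j are Rayleigh with scale parameter tau_j, then
#   P(t < T) = 1 - exp(-T^2 / (2 tau_j^2)) >= M
# inverts to an upper bound on tau_j. Target values are hypothetical.

def max_tau(T, M):
    """Largest Rayleigh scale parameter tau that still meets the target of a
    fraction M of jobs completed within time T."""
    return T / math.sqrt(-2.0 * math.log(1.0 - M))

def fraction_on_time(tau, T):
    """Fraction of jobs completed within time T for scale parameter tau."""
    return 1.0 - math.exp(-T * T / (2.0 * tau * tau))

tau_bound = max_tau(T=5.0, M=0.85)     # e.g. 85% of repairs within 5 days
```

Such closed-form bounds on each tau_j could prune the numerical search over the allocation fractions f_j before any simulation is run.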
The formulation given above is only one way of considering the problem. 
Are there more appropriate methods of formulating and solving the problem?
<u>Experimental Data</u>
The main experimental data available are data taken from the scheduling 
algorithm which allow the determination of the probability density 
functions for a selection of values of <i>N</i> and  <img alt="tex2html_wrap_inline91" src="http://www.ma.hw.ac.uk/%7Eandrewl/ESGI/BT/img12.gif" align="MIDDLE" height="26" width="16"> .
* 2010, Dec 13-17: Chongqing (China). [[China Study Group| China Study Group]]
* 2011, Jan 10-14: Palo Alto, California (USA). [[Sustainability Problems| Sustainability Problems]]
* 2011, Jul 11-13: Oxford (UK). [[Mathematics-in-Eyes Study Group| Mathematics-in-Eyes Study Group]]

Dewatering in the wire section of a paper machine, Oy Keskuslaboratorio - Centrallaboratorium Ab


In a paper machine, a fibre suspension of low consistency is sprayed onto the wire. As water drains through the wire, the fibres accumulate in layers on top of it. As the fibre mat grows, the lowest fibre layers are compressed, so that the porosity of the fibre network decreases and the flow resistance of the water increases. The aim is to construct and solve a model for the formation of the fibre mat on the wire. The model should describe the rate of water removal through the wire. In a simpler version of the problem, the growth of the fibre mat is neglected; instead, one considers a fibre network of constant thickness whose pores are initially filled with water.
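The simpler constant-thickness problem can be sketched with Darcy's law: the superficial velocity through the mat is q = K Δp / (μ L), and with the driving pressure proportional to the remaining water column the column drains exponentially. All parameter values below are hypothetical, chosen only to illustrate the structure of the model.

```python
# Minimal sketch of the simpler problem above: drainage through a fibre
# network of constant thickness L, initially saturated with water. Darcy's
# law q = K * dp / (mu * L) with dp = rho * g * h gives
#   dh/dt = -(K * rho * g / (mu * L)) * h,
# integrated here with explicit Euler. Parameter values are hypothetical.

RHO, G, MU = 1000.0, 9.81, 1.0e-3      # water density, gravity, viscosity (SI)
K, L = 1.0e-12, 0.5e-3                 # mat permeability (m^2), thickness (m)

def drain(h0, dt, steps):
    """Return the water-column height history under exponential drainage."""
    rate = K * RHO * G / (MU * L)
    h = h0
    history = [h]
    for _ in range(steps):
        h += -rate * h * dt
        history.append(h)
    return history

levels = drain(h0=0.01, dt=0.001, steps=200)   # 10 mm head, 0.2 s simulated
```

The full problem would replace the constant permeability K with one that decreases as the growing mat compresses, coupling mat growth to the drainage rate.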
In agriculture, the key point is optimising production at limited cost. Growers have striven for the best results for decades, and nowadays this cannot be done without a computer. In present-day greenhouses, control of the indoor climate is fully automated.

The indoor climate could be held constant, but to optimise production it is necessary to adapt the indoor climate to the conditions outside the greenhouse. The importance of this can be illustrated by the effect of passing showers: if the grower does not anticipate the possibly sharp temperature decrease that follows, production can be delayed by a week. A swift and adequate reaction is therefore of the utmost importance.

Many theoretical models have been developed that try to connect the climatic conditions to the resulting crop production. Unfortunately, growers do not profit much from the insight obtained with the present models. Most studies are aimed at one particular crop, but the characteristics of different crops often differ significantly. Perhaps even more importantly, the characteristics are not constant throughout the year, whereas the present models represent them with fixed parameters.

Phytocare would like to change this situation and be able to advise growers using a new approach. The idea is the following. The climate computer applies specified amounts of moisture, light, nutrients, etc. At the same time, the indoor climate is measured: every 5 minutes the computer records, among other things, the temperature, humidity, and light level in the greenhouse. For each plant, the conditions in which it lived are thus known at 5-minute resolution. Moreover, the production of the plants themselves can be measured: for tomatoes, for example, one can count the increase in the weight of fruit per plant within a certain period. For roses, the crop most of Phytocare's advice concerns, production can be measured by the growth of branches per week; after all, a branch can be harvested as soon as it reaches the length required for sale. Measuring production can only be done on a longer timescale, however; usually production is measured every week. Using the measured climatic conditions in the greenhouse and the weekly production, we would like to find species-specific parameters for the plants, for example by fitting them to the data. As explained before, one of the complications is that the climate measurements are taken every few minutes, whereas production can be measured only weekly. With the parameter values obtained, the approach can be reversed: are there rules of thumb that can be given to growers by which their climate control can increase production at reasonable cost? Different scenarios could be calculated to advise the grower.

Questions to be raised are, for example: Can Phytocare advise growers on how to optimise their production using a model that has been fitted to the individual grower and greenhouse by the approach described above? Or at least determine when the photosynthetic process of the plants is optimal (optimal production is closely related to optimal photosynthesis)? Is the model accurate enough to determine whether, for example, certain investments in the greenhouse (to optimise production) will increase production sufficiently to justify the cost?
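The fitting step described above can be sketched as follows: aggregate the 5-minute climate records to the weekly timescale on which production is measured, then fit a species-specific parameter by least squares. The model (production proportional to integrated light) and all numbers are hypothetical; real crop models are far richer.

```python
# Sketch: bridge the 5-minute climate timescale and the weekly production
# timescale, then fit one species-specific parameter. Synthetic data only.

def weekly_sums(samples_per_week, series):
    """Aggregate a 5-minute series into weekly totals."""
    return [sum(series[i:i + samples_per_week])
            for i in range(0, len(series), samples_per_week)]

def fit_proportionality(x, y):
    """Least-squares slope a for the model y = a * x (no intercept)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

SAMPLES = 7 * 24 * 12                     # 5-minute samples in one week
light = [0.5] * (2 * SAMPLES)             # two weeks of (constant) light data
weekly_light = weekly_sums(SAMPLES, light)
production = [0.03 * w for w in weekly_light]   # synthetic weekly harvest
a = fit_proportionality(weekly_light, production)
```

With a fitted parameter in hand, the reversed question (which climate scenario maximises predicted production per unit cost) becomes an ordinary optimisation over candidate control strategies.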
A company has a large software system that has become costly to maintain. They need to break the system into smaller, more manageable modules. Of course, this is a job for a trained expert; however, when the system is large, an automated suggestion for a partition into modules is useful.

Given a Call Graph (nodes are programs, classes, or similar; an edge a->b means program a uses program b), we need to partition the nodes into sets in a manner that favours edges between nodes in the same set and minimises edges between nodes in different sets. We are interested in algorithms that exploit only the structure of the Call Graph.

The size and complexity of a Call Graph can vary considerably depending on the underlying language and call level. For instance, a typical COBOL system at the program level will contain thousands of nodes, whereas a Java system at the method level can have over one million nodes.

Good solutions are useful for various graph sizes.
Also available as a [[pdf-file|p/esgi/52/callgraph.pdf]] (with picture). 
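The objective stated above can be made concrete by scoring a candidate partition: count internal edges (to be favoured) against cut edges (to be minimised). The tiny call graph below is a hypothetical example; practical algorithms would search over partitions to optimise such a score.

```python
# Sketch of the quality measure implied above: internal versus cut edges of
# a directed call graph under a candidate partition. Hypothetical example.

def partition_score(edges, module_of):
    """Return (internal, cut) edge counts.
    edges: iterable of (caller, callee); module_of: node -> module id."""
    internal = cut = 0
    for a, b in edges:
        if module_of[a] == module_of[b]:
            internal += 1
        else:
            cut += 1
    return internal, cut

edges = [("main", "io"), ("main", "db"), ("io", "fmt"), ("db", "sql"),
         ("sql", "fmt")]
module_of = {"main": 0, "io": 0, "fmt": 0, "db": 1, "sql": 1}
internal, cut = partition_score(edges, module_of)
```

Scalable methods (e.g. local-search swaps or multilevel coarsening) improve this score incrementally, which matters when the graph has a million nodes.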
In search of a quick method to forecast the quality of a flight schedule


KLM flies to over 150 destinations with 97 aircraft. Four times a year, a new flight schedule is developed. Though the operational feasibility is taken into account up to a certain degree during the development process, the aim at that stage is mostly to maximise the number of seats that can be sold. During schedule development, KLM considers various commercial aspects such as the expected demand per destination and the number of possible transfer connections at Schiphol Airport.

The schedule is usually published as an Aircraft Rotation Schedule. This is a diagram that shows how flights are assigned to the 97 aircraft. The figure below shows an example. This Rotation Schedule is different each week, because each day many adaptations are made so as to minimise delays. For instance, if we know that an aircraft will arrive at Schiphol Airport with a delay, we could try to assign its next flight to another aircraft so that that flight can still leave on time. Usually, we will then need a couple of other adaptations to have all flights fit in the Rotation Schedule again. When a schedule is first published, we don't know the exact layout of the Rotation Schedule, so we publish a hypothetical "average" one instead.

{{c{~~Part of an Aircraft Rotation Schedule as commonly published within KLM. Each row represents the assignment of an aircraft. The coloured lines display the various activities (flights, maintenance, reserves).~~}}}

Before a schedule is published, an estimation of the expected punctuality - that is, the percentage of "on time" flights - is performed using a simple deterministic model. Because this model lacks accuracy, a simulation model is currently being developed in order to enable a better forecast. This model simulates aircraft movements according to a given schedule. The model subjects the schedule to a "stress test" by generating various disruptions such as air traffic congestion, delays during the boarding process or unexpected problems during maintenance. Throughout the simulation, a Problem Solver tries to resolve delays by swapping flights in the Rotation Schedule, or in extreme cases by cancelling them. The better it succeeds, the fitter the schedule is considered to be flown. The schedule is assessed according to the punctuality that can be achieved in this way and to the number of cancellations.

A simulation, though, has several disadvantages. Processing times usually are long, which limits the number of scenarios that can be simulated at one time. Also, we need to collect a lot of data about the processes that are being simulated. For the simulation model currently under development we need statistics about the variation in the actual flight duration, about the variation in the time it takes to handle an aircraft on the ground (boarding, fuelling, catering, etc.), about break down times of each aircraft type, and so on. Each of these statistics must constantly be updated to reflect the change in flight routes, working methods, fleet, etc.

We would be very happy if we had a simple model that would enable us to make a comparative statement, such as:

//"Of a number of alternative schedules, schedule X will provide the best performance."//

Is there a simpler method to evaluate the performance of a given flight schedule?
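One cheap alternative to full simulation is to propagate delay along a single aircraft's rotation: each leg adds a random disruption, and the ground-time slack absorbs part of the accumulated delay. The sketch below does exactly that and reports the fraction of legs departing within 15 minutes; the disruption distribution, slack values, and on-time threshold are all hypothetical.

```python
import random

# Hedged sketch of a quick punctuality estimate: delay recursion
#   delay_{k+1} = max(0, delay_k + disruption - slack)
# along one rotation, Monte Carlo over disruptions. Numbers are hypothetical.

def punctuality(n_legs, slack, trials=2000, seed=1):
    """Fraction of legs departing with at most 15 minutes of delay."""
    rng = random.Random(seed)
    on_time = total = 0
    for _ in range(trials):
        delay = 0.0
        for _ in range(n_legs):
            total += 1
            if delay <= 15.0:
                on_time += 1
            disruption = rng.expovariate(1 / 10.0)   # mean 10 min disruption
            delay = max(0.0, delay + disruption - slack)
    return on_time / total

tight = punctuality(n_legs=8, slack=5.0)
loose = punctuality(n_legs=8, slack=20.0)
```

A model of this kind cannot replace the Problem Solver's swaps, but it can rank alternative schedules ("schedule X will perform best") far faster than a full simulation run.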
Part of the planning of each KLM flight consists in determining the quantity of drinking water to take on board. Risking a water shortage is not an option, whereas a surplus costs unnecessary extra fuel.

However, the level of the water tanks cannot be read off accurately: on most airplanes, the water level is displayed rounded to the nearest 1/8th of a tank. This affects the accuracy of the forecast data and makes it impossible to fill the tank with an exact amount of water. For this reason, KLM maintains a safe but expensive margin on the amount of drinking water on board.

As an example, consider the stretch Amsterdam - New York. On this trip, taking one litre of extra water on board would cost about EUR 0.10. Consistently rounding off to the next highest one eighth of a tank would mean taking 100 litres of water in surplus on average. On a yearly basis, this would add up to EUR 7,500.00 for just this one destination.

Could a strategy be devised, given the described handicap, which could enable KLM to always take a sufficient quantity of drinking water while minimizing the costs of the safety margin? 
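The figures quoted above can be checked under one simplifying assumption: if the required water amount falls uniformly within an eighth of the tank, always rounding up to the next eighth wastes, on average, half an eighth. The 1600-litre total capacity below is a hypothetical assumption, chosen so that this average surplus matches the 100 litres mentioned in the text.

```python
# Quick consistency check of the text's figures. The tank capacity and the
# flights-per-year count are hypothetical assumptions, not KLM data.

TANK_LITRES = 1600.0
EIGHTH = TANK_LITRES / 8.0            # display resolution: 200 litres
avg_surplus = EIGHTH / 2.0            # expected overshoot when rounding up
                                      # (uniform requirement within an eighth)
COST_PER_LITRE = 0.10                 # EUR, Amsterdam - New York stretch
FLIGHTS_PER_YEAR = 750                # hypothetical: about two per day
yearly_cost = avg_surplus * COST_PER_LITRE * FLIGHTS_PER_YEAR
```

Any improved strategy has to beat this baseline: the display quantisation bounds how much of the 100-litre average surplus can actually be recovered.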
 Dublin City Council

!!Part I

A Real-Time Passenger Information (RTPI) system for bus and light rail is in the process of being rolled out on a nationwide basis by the National Transport Agency (NTA). Dublin City Council are providing the technical implementation for services encompassing physical street signs, SMS messages, a public web site and a number of smart phone applications. Currently there are some eighty physical street signs in place in Dublin and a website that provides predictions for five hundred and fifty of the four and a half thousand Dublin Bus bus stops. However, the noisiness and variability of prediction data has considerably slowed the progress of the roll-out.
Although the system will eventually cater for Bus Eireann, private bus operators and the LUAS, currently the only user of the system is Dublin Bus. Prediction times for when a bus will arrive at a particular stop are generated by software designed by Init Systems for Dublin Bus and forwarded to Dublin City Council. This information is subject to certain constraints, such as a look-ahead window and a maximum number of buses to receive information for. Currently Dublin Bus has placed a limitation of five hundred and fifty bus stop “subscriptions” on the predictions their software generates. It is possible that their servers can be upgraded to handle a thousand subscriptions, but this is uncertain, and the original goal of four and a half thousand subscriptions looks unlikely to be reached by this method.
Dublin City Council also receives all of the GPS location co-ordinates of every in-service bus in the Dublin Bus fleet, subject to the bandwidth constraints of the Dublin Bus private radio network. At peak times this amounts to almost one thousand one hundred buses. In practice, we find that the bandwidth limitation amounts to a location update for each bus every thirty seconds. The location is calculated using differential GPS and is said to be accurate to within five metres. Other information provided includes schedule deviation, whether a bus is at a stop or not, and whether a bus considers itself to be in congestion or not.
Dublin City Council is seeking to answer two separate and distinct but related questions about the system.
Assuming it is not possible to provide accurate predictions from just the location information stream provided (due to the close proximity of bus stops to one another within a city and the infrequency of updates), what additional information would be required to deliver a system that can accurately predict the time that a particular bus will arrive at a particular stop? If this additional information were present, what level of complexity or processing constraints might be encountered for a system attempting to generate predictions for one thousand one hundred buses servicing four and a half thousand bus stops?
The stated aim of the NTA for the project is to achieve 98% accuracy of predictions for the system. Assuming the location information to be always accurate, how could Dublin City Council approach verifying whether the predictions are sufficiently accurate? The current approach is to survey sites manually, but this is both time-consuming and expensive.
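To see why the location stream alone is considered insufficient, it helps to write down the naive predictor it supports: estimate the bus's speed along the route from its last few 30-second position updates and divide the remaining distance to the stop by that speed. The positions below (distance along the route in metres) are hypothetical.

```python
# Hedged sketch of a naive ETA predictor built only from the 30-second GPS
# feed. As the text suggests, this alone is unlikely to reach 98% accuracy:
# it ignores dwell times, signals, and congestion ahead of the bus.

def eta_seconds(route_positions, update_interval, stop_position):
    """route_positions: recent along-route positions in metres, oldest first;
    update_interval: seconds between updates; stop_position: target stop."""
    distance_covered = route_positions[-1] - route_positions[0]
    elapsed = update_interval * (len(route_positions) - 1)
    speed = distance_covered / elapsed            # m/s, recent average
    remaining = stop_position - route_positions[-1]
    return remaining / speed if speed > 0 else float("inf")

eta = eta_seconds([1000.0, 1150.0, 1310.0, 1450.0], 30.0, 2350.0)
```

The additional information the question asks about (dwell times, signal states, historical run times per segment) is exactly what this predictor lacks.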

!!Part II

The Northern Cross Junction that intersects the N32 with the Malahide Road is one of the busiest junctions within the Dublin City Council boundaries. Close to 50,000 vehicles a day can travel through this junction. Traffic engineers expend considerable effort fine tuning the traffic phasing and voting pattern algorithms that decide what phases to run or skip in an attempt to maximise junction throughput.
One item that rarely gets examined is the total cycle length (i.e. the amount of time it takes to cycle from the beginning of the first phase to the end of the last phase). This is capped by convention at a maximum of 120 seconds across the entire city.
We would ask that an optimum cycle length for this junction under peak load be calculated to see if it varies greatly from the 120 second convention currently used.

[[More information (pdf)|p/esgi/82/DublinCityCouncil_Part2.pdf]]
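A natural starting point for the cycle-length question is Webster's classical formula for the delay-minimising cycle length, C0 = (1.5 L + 5) / (1 - Y), where L is the total lost time per cycle (seconds) and Y the sum of the critical flow ratios of the phases. The input values below are hypothetical; real values would come from the junction's traffic counts.

```python
# Webster's optimum cycle length, as a baseline against the 120 s convention.
# Lost time and flow ratios below are hypothetical illustration values.

def webster_cycle(lost_time, flow_ratios):
    """Webster's delay-minimising cycle length in seconds:
    C0 = (1.5 * L + 5) / (1 - Y), with Y the sum of critical flow ratios."""
    Y = sum(flow_ratios)
    if Y >= 1.0:
        raise ValueError("junction oversaturated: no finite optimum cycle")
    return (1.5 * lost_time + 5.0) / (1.0 - Y)

c0 = webster_cycle(lost_time=16.0, flow_ratios=[0.25, 0.20, 0.30])
```

Because C0 grows sharply as Y approaches 1, a heavily loaded junction like Northern Cross may well have an optimum above the 120-second cap, which is exactly what the study group is asked to check.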
The Real Time All Vehicle Simulator (RTAVS) harness is a tool used by QinetiQ, predominantly for aircraft simulation. Within this harness there is a need to calculate, in real time, the impact point of air-to-ground munitions dropped from aircraft flying at altitudes of up to 30,000 feet. The falling munitions are subject to drag during their flight. The drag force depends on the air pressure and density, which vary with altitude. This makes the equation of motion for the falling munitions non-linear. Currently the impact point is predicted by extrapolating the equation of motion in discrete time steps. This method is rather crude, and we would like to know whether there is an alternative. Any alternative will have to work within the same constraints as the existing method: all calculations must complete in one 50 Hz harness cycle and have no detrimental effect on other processes being performed in the same cycle.
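The integration underlying the current approach can be sketched as follows: step the 2-D equations of motion at the 50 Hz harness rate, with air density given by the simple exponential approximation rho = rho0 · exp(-h / H). The mass and drag numbers below are hypothetical, and this is a generic illustration rather than QinetiQ's actual extrapolation scheme.

```python
import math

# Sketch: falling store with altitude-dependent drag, explicit Euler stepping
# at dt = 0.02 s (one 50 Hz cycle). Drag and mass figures are hypothetical.

RHO0, H_SCALE, G = 1.225, 8500.0, 9.81     # sea-level density, scale height
MASS, CD_AREA = 250.0, 0.06                # kg; drag coefficient times area

def fall_time_and_drift(h0, vx0, dt=0.02):
    """Step the 2-D equations of motion until ground impact.
    Returns (fall time in s, horizontal distance travelled in m)."""
    x, h, vx, vh, t = 0.0, h0, vx0, 0.0, 0.0
    while h > 0.0:
        rho = RHO0 * math.exp(-h / H_SCALE)   # exponential atmosphere
        v = math.hypot(vx, vh)
        drag = 0.5 * rho * CD_AREA * v        # drag force divided by speed
        vx += (-drag * vx / MASS) * dt
        vh += (-G - drag * vh / MASS) * dt
        x += vx * dt
        h += vh * dt
        t += dt
    return t, x

t_impact, x_impact = fall_time_and_drift(h0=9144.0, vx0=200.0)  # 30,000 ft
```

Alternatives worth comparing against this baseline include higher-order integrators with larger steps, or semi-analytic approximations of the drag integral, provided each update still fits in one harness cycle.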
From the Nederlands Forensisch Instituut there are two related problems,
one about so-called toolmarks and the other about shoeprints. Toolmarks appear
when, during a burglary, a tool (a screwdriver or crowbar) is used to break open a door
or window. The tool leaves a mark: in the door post, for example, a scratch
is made, a toolmark. It is also possible that at the scene of, for example, a
murder, burglary, or bank robbery, a shoeprint is found.

{{c{A comparison of a trace from the crime scene (left) and of the tool of the suspect (right).}}}

In forensic science and criminalistics, these traces or marks are used as evidence.
With a special casting compound, the marks can be preserved down to the finest detail.
When the police later find a suspect, they can check whether the suspect's tools or
shoes could have left these traces: the traces from the crime scene can be compared
with test traces made with the tools or shoes of the suspect.

The question for the study group is to determine the probability that the trace found at
the crime scene was made by the tool or shoe of the suspect; in other words, to
design a probability model. More detailed descriptions of the "toolmarks" and
"shoeprints" problems follow below.

{{c{The traces of a shoeprint found at the crime scene and of a shoe from a suspect. Damages are marked.}}}


!Forensic examination of toolmarks.
!!On what basis can toolmarks currently be compared?
The figure shows the microscope image of a so-called match. To the left of the dividing line is the cast of a crime-scene mark, and to the right a test mark made with a screwdriver found on a suspect. The mark of the screwdriver shows a series of fine and coarser striations. These correspond to striations or irregularities on the screwdriver itself, where they partly arose during the manufacturing process, for example during grinding. In addition, use of the screwdriver causes small damages to the blade tip, and these likewise leave striations in the crime-scene mark. Other tools show similar striation patterns: crowbars, knives, scissors, axes, hammers, et cetera. They play a role in burglaries, but also in serious violent crimes. Striations also play a decisive role in another large branch of forensic examination, that of bullets and cartridge cases. Striation patterns can be produced in different ways, but the identification problem is often the same.

Striation marks are usually identified with the aid of a comparison microscope. The striations of a test mark and a crime-scene mark are projected side by side, and it is examined whether the striations match up, i.e. lie in each other's extension. The comparative examination is carried out by experienced examiners who, on the basis of long training and experience, decide whether there are sufficiently many matching lines. The examiner qualifies his judgement by expressing it as a degree of probability. Although the identification process is applied frequently and successfully in practice, by scientific standards a number of uncertainties can nevertheless be identified. These have been acknowledged by the examiners themselves, but the identification process is also viewed increasingly critically from outside forensic science.

!!The current working method.
In comparative striation-mark examination, the following situations may be encountered (assuming that the striations in the test mark were caused by a characteristic part of the screwdriver):
* no matching striations;
* a few striations, spread over the whole width of the mark, match;
* a group of striations matches;
* over the whole width, (nearly) all striations match.

Until now, the strength of the conclusion has depended on the number of matching striations, and to a lesser extent on the width of the matching part relative to the width of the whole mark.

The above criteria conflict with each other. For example:
* A mark 10 mm wide matches a test mark of a crowbar 30 mm wide. Because the matching part does not cover the whole width of the tool, the conclusion will be "probably".
* But in the case of a mark 8 mm wide that matches a test mark of a screwdriver 8 mm wide, the conclusion will be that the mark was caused by the screwdriver.

!!The problem statement.
The above description contains at least three core problems.
* The expert judges, on the basis of his knowledge and experience, that there are sufficiently many matching lines. The question, however, is whether this can be given a firmer scientific foundation. In other words, can a probability model be developed with which, under the stated conditions, the probability of a chance match can be estimated?
* What does the probability model become when the line pattern is not optimal? Such a situation arises, for example, when only part of a mark is available for comparison, or when the mark is complete but small.
* In other, comparable branches of forensic examination, trace patterns also occur whose rarity must be estimated. Unlike in toolmark examination, the measuring instrument or observer there has a certain inaccuracy. In assessing the individual striations (or, more generally, features) there is a certain probability of an incorrect assessment. Such a situation arises, for example, when there is some variation in the position of the individual lines. Within that variation the observer qualifies two lines as matching, but in fact there is a certain probability that this decision is incorrect. How does the probability model change when an inaccurate measuring instrument is used?

From these more generally formulated questions, a number of questions can also be derived that relate directly to situations taken from practice.
* Suppose empirical research shows that, when the marks of known different (characteristic) mark sources are compared, in none of the cases are more than 5 consecutively matching striations found (even though there may be more than 100 striations in the whole mark). What does such a result mean for practical examination? Some American toolmark examiners justify very far-reaching (certain) conclusions on the basis of this research result whenever 5 or more matching lines are found. Other examiners are considerably more reserved in such a case. Who is right? Is counting lines a suitable basis at all, and if so, can class boundaries be specified?
* Once a probability model has been developed, how should experiments or samples be designed in order to estimate the parameters of the model reliably, for example to obtain information such as that mentioned above?
* Should the strength of the conclusion depend only on the number of matching striations, or also on the percentage of the matching part of the mark relative to the whole width of the tool?
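A minimal sketch of the kind of probability model asked for: if each of n striation positions in a comparison of unrelated marks independently "matches" by chance with probability p, what is the probability of ever seeing a run of k or more consecutive matches? The independence assumption and the values of n and p below are hypothetical and would need empirical justification.

```python
# Probability of a chance run of >= k consecutive matching striations among n
# positions, under an (assumed) independent Bernoulli(p) match model.

def prob_run_at_least(n, k, p):
    """Dynamic programming over (position, current run length)."""
    # state[r] = probability of currently having a run of exactly r matches,
    # without having reached length k yet
    state = [1.0] + [0.0] * (k - 1)
    reached = 0.0
    for _ in range(n):
        new = [0.0] * k
        for r, prob in enumerate(state):
            new[0] += prob * (1.0 - p)         # mismatch resets the run
            if r + 1 == k:
                reached += prob * p            # run reaches length k: absorb
            else:
                new[r + 1] += prob * p
        state = new
    return reached

p_chance = prob_run_at_least(n=100, k=5, p=0.2)
```

Even this toy model speaks to the "5 consecutive lines" debate above: the chance-match probability depends strongly on n and p, so a fixed line count cannot carry the same evidential weight for every mark.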


!Forensic examination of shoeprints.
!!Comparing shoeprints.
Between shoes of different brands and sizes there exist what are called differences in class characteristics. The sole pattern of a shoe of brand A may differ from that of brand B. Owing to changes in the manufacturing process, individual batches may in turn differ from one another. Of greater value for the identification process, however, are damages and irregularities that have arisen through use. Their position and shape can be compared with irregularities in the secured print. As in much comparative forensic examination, the training and experience of the expert play a role in the comparison process.
{{c{An image of the cast of the crime-scene mark and of the shoe sole itself. Damages are marked.}}}

!!What is the practice?
The examiner assesses both the position and the shape of a damage, and checks whether those in the crime-scene mark correspond with those on the shoe sole and with test marks made with it.
In reaching a conclusion, the expert is guided by three parameters, namely:
* the characteristic value of the damages;
* the degree of correspondence between the damage and the irregularity;
* the number of corresponding damages/irregularities.

In estimating the characteristic value, two aspects play a role, namely:
* the dimensions of the damage (length and width); the underlying idea is that the larger a damage is, the less often it occurs, and hence the higher its characteristic value;
* the number of components of which the damage consists; here it is assumed that a damage consisting of 6 components has a higher characteristic value than a damage consisting of 2 components.
{{c{Damage with 6 components and with 2 components.}}}

The conclusion to be drawn also depends on the number of correspondences. For example, when a line irregularity of 2 mm is found in a mark, and it corresponds in position, direction, and shape with a damage on a shoe, it is concluded that the mark was possibly caused by the shoe. A relevant consideration here is that line damage is the most common type, and that a line in the mark may also have been caused by a contaminant (for example, part of a blade of grass or a pine needle). For a more complex shape, the probability that the irregularity in the mark was caused by a contaminant becomes smaller. With each further correspondence of the same type that is found, the expert assumes that the probability of a coincidental match decreases more than linearly, and on that basis his conclusion also becomes stronger. The following scale of conclusions is used.

Scale of conclusions used:
* was made with ...
* very probably made with ...
* probably made with ...
* possibly made with ...
* it could not be established that ...

The description above shows the influence of the expert - and with it his knowledge and experience - on the identification process. In scientific terms, parts of the identification process ought to be supported by more objective criteria. In general terms this amounts to developing a probability model that, on the one hand, covers all aspects that play a role in the process and, on the other hand, makes it possible to obtain a reliable experimental estimate of the relevant parameters. The probability model could be built up with increasing complexity.
* Given a damage feature at a certain position on a shoe sole, what is the probability of a chance match on an arbitrary other shoe sole with otherwise the same class characteristics (tread pattern etc.)? Position, shape and orientation of the damage feature are all relevant. It should be mentioned that models based on dividing the sole into a number of cells have been developed in the forensic literature; these, however, have proved inconsistent.
* What does the model become when there are several damage features on the same shoe?
* What does the model become when the complexity of the damage feature, which the experts regard as important, is also taken into account? Can a measure of the complexity of a damage feature be developed? And if so, is there a correlation between this complexity measure and the chance-match probability?

Besides the development of the model itself, experimental estimation of its parameters is also considered important. A number of practical limitations have to be taken into account. For reasons of time, it is not feasible to determine the rarity of the damage shape in each individual case. If in one case a damage feature consisting of three components is found, the probability of that type of damage arising could be estimated experimentally. If in a new case a damage feature of four components is then found, its probability of occurrence could again be determined. It seems more efficient, however, to develop a model that gives the change in probability per added component. In general, the frequency of a damage feature can only be estimated by sampling. In addition, experiments could be carried out in which, for example, a shoe wearer is followed to establish how quickly damage features develop.
All this leads to the following question:

How should an experiment/sample be designed to estimate the parameters of the probability model with sufficient reliability?
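A toy numerical illustration of the faster-than-linear effect of multiple corresponding features (the per-feature match probability and population size are hypothetical, chosen only for illustration):

```python
# Toy chance-match model, all numbers hypothetical: suppose each independent
# damage feature matches a randomly chosen shoe by chance (in position, shape
# and orientation) with probability p.  With k corresponding features the
# chance-match probability falls geometrically, i.e. faster than linearly.
p = 0.01                      # hypothetical per-feature chance-match probability

def chance_match(k):
    """Probability that k independent features all match by coincidence."""
    return p ** k

def expected_false_hits(k, n_shoes):
    """Expected number of shoes in a population showing a pure chance match."""
    return n_shoes * chance_match(k)

for k in range(1, 5):
    print(k, chance_match(k), expected_false_hits(k, 1_000_000))
```

The independence assumption is exactly what the proposed probability model would have to test: nearby features on a sole may well damage together, which would make the true decrease slower than geometric.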
The Acordis acrylic fibres plant in Grimsby operates thirteen production lines, extruding four basic polymer types to make fibres. There are twelve key variables which define the end product. All changes to these variables take time (some more than others) and low grade product or waste is produced during the changeover. The most common product change is in the fibre colour. We believe that optimised production scheduling can reduce the number or duration of the changes. Production scheduling is currently carried out by one skilled person who has to consider both production issues and customer requirements. We are looking for a tool to both help the scheduler and assist the production team leaders who periodically have to re-jig the production schedule outside of day hours and at short notice. 
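The scheduling task described above can be prototyped very simply; a toy sketch in which the colours and the changeover-cost matrix are hypothetical, and a real tool would also have to respect due dates, the four polymer types and the thirteen lines:

```python
# Toy changeover-minimising sketch.  The colours and the changeover-cost
# matrix are hypothetical; a real scheduler must also respect due dates,
# polymer type, line capabilities and the other product variables.
cost = {  # hours of low-grade output when switching colour a -> b
    ("white", "white"): 0, ("white", "red"): 2, ("white", "black"): 4,
    ("red", "white"): 3, ("red", "red"): 0, ("red", "black"): 2,
    ("black", "white"): 6, ("black", "red"): 3, ("black", "black"): 0,
}

def greedy_sequence(batches, start):
    """Nearest-neighbour ordering: always run next the batch with the
    cheapest changeover from the current colour."""
    remaining = list(batches)
    order, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda b: cost[(current, b)])
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

print(greedy_sequence(["black", "red", "white"], start="white"))
```

Minimising total changeover cost over a sequence is a travelling-salesman-type problem, so a greedy pass like this is only a baseline; it does, however, capture the scheduler's rule of thumb of running similar colours back to back.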
The problem involves a transducer supplied to Nan Gall by Solartron.  It is used for measuring fluid density and viscosity.  Nan Gall would like to apply this transducer to down-hole oil-well applications.  The transducer has not been used down hole before, though it has been used in petroleum processing plants.  Nan Gall buy only the transducer, without Solartron's electronics and software.  This is because Solartron's electronics will not fit in our pressure-rated housing, is not rated to down-hole temperatures and draws too much current for battery operation.  See Solartron's website:


We do not have a confidentiality agreement with Solartron.  However, they are aware that we are doing our own electronics development and research into down-hole applications of the transducer.  They have told me that they have performed some simulation of the system in the past but have not released the results to us.

The transducer is based on the principle that the resonant frequency of an element is dependent on the density of the fluid in which it is immersed.  This is presumably because some of the fluid is dragged along with the vibrating element, altering the effective mass.  The viscosity of the fluid applies a damping force to the system.  The Q of the resonance therefore decreases with increasing viscosity.

A tuning fork design is used because it is immune to external sources of vibration.  The tuning fork is excited by a //driver// piezoelectric element.  The resulting motion of the tuning fork is sensed by a //pick-up// piezoelectric element.  The voltage applied to the driving piezo is proportional to the stress applied to the tuning fork.  If the pick-up piezo is open circuit, the voltage obtained from the pick-up piezo is proportional to the strain.  Alternatively, if the pick-up piezo is short circuit, then the current output is proportional to rate of strain.

Nan Gall’s electronic circuit applies a phase shift to the signal from the pick-up piezo and amplifies it to a constant peak-to-peak level.  This voltage is then applied to the driver piezo.  Depending on the phase shift applied it is possible to vibrate the tuning fork “on-resonance” or to either side of the resonance.  For example, if the pick-up is short circuit then:
* 0° phase shift causes vibration on resonance.
* +45° phase shift causes vibration at 3dB down point with higher frequency (driver is 45° in front of pickup).
* -45° phase shift causes vibration at 3dB down point with lower frequency (driver is 45° behind pickup).
//The transducer manufacturers recommended operation at the upper 3dB point for measuring density.// Our experiments have verified that this is the case, at least for fluids up to about 100 cP viscosity, with a very good linear fit between frequency and density.

Note that a simple model based on the equation of a simple damped harmonic oscillator: $$ m\ddot x = F-v\dot x-kx $$ (where `m` is the inertia (mass of tuning fork plus dragged fluid), `v` is the damping constant related to viscosity, `k` is the spring constant and `F` is the applied force) predicts that the resonant frequency is independent of viscosity.  The frequency of the upper 3dB point would appear to depend on viscosity.  Maybe this model is invalid because the volume of fluid dragged by the tuning fork is dependent on viscosity.
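Under this simple model the phase points can be located explicitly: with the pick-up short-circuited (output proportional to velocity) the 0° phase point sits at $$\omega_0 = \sqrt{k/m}$$ for any damping, while the ±45° (3 dB) points are roots of $$m\omega^2 \mp v\omega - k = 0$$ and so shift with viscosity. A minimal numerical sketch, with illustrative parameter values that are our assumption and not the real transducer's:

```python
import math

# Simple damped harmonic oscillator m x'' = F - v x' - k x (from the text).
# With the pick-up piezo short-circuited the output is proportional to strain
# rate (velocity), whose phase relative to the drive is 0 at the resonance
# w0 = sqrt(k/m) for ANY damping v -- so on-resonance operation is
# viscosity-independent in this model.
def upper_3db_freq(m, k, v):
    """Angular frequency of the +45 deg (upper 3 dB) point: the positive
    root of m*w**2 - v*w - k = 0, which DOES depend on the damping v."""
    return (v + math.sqrt(v * v + 4.0 * m * k)) / (2.0 * m)

# Illustrative parameter values (not Solartron's):
m, k = 1.0, 1.0e6
w0 = math.sqrt(k / m)
for v in (1.0, 10.0, 100.0):
    # the upper 3 dB point sits roughly v/(2m) above w0
    print(v, upper_3db_freq(m, k, v) - w0)
```

So within this model the upper 3 dB frequency shifts upward by about v/2m, which is consistent with the remark above that it appears to depend on viscosity even though the resonance itself does not.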

The questions that we would like to be addressed by the study group are the following:
# What is the best way to operate the transducer to determine fluid density?  Why, and to what degree, is the frequency of the upper 3dB point independent of viscosity?
# If the transducer is mounted in a cylindrical housing, how will this affect its operation?  Due to the space restrictions of running in an oil well, the transducer will be mounted inside a housing that is smaller than the transducer manufacturer recommends.  This affects the resonant frequency.

!Problem exposition
We consider a large population of individual 'units'. At any time, t, each unit is in one and only one of n distinct 'states'.

Over time a unit may remain where it is or switch to one of the other states. The likelihood of switching out of a state is a function of its residence time, s (the time it has been in the state), and of the state it is in. Once a unit switches, it starts in its new state with residence time s=0. The state a unit switches into is also state and residence time dependent.

We describe the switching time distribution by a suitable residence-dependent function for each state. Similarly, the residence-dependent destinations may be described by a set of transition probabilities.

What if all units are not quite the same - in terms of their residence dependence - so there is extra variability/dispersion?

!First Problem
Given a set of n switching time densities
$$ r_i(s) : \mathbb{R}^+ \to \mathbb{R}^+, \qquad \int_0^\infty r_i(s)\,ds = 1, \qquad i=1,\dots,n, $$
we write
$$ R_i(s) = \int_0^s r_i(u)\,du \quad\text{(the primitive)}, \qquad i=1,\dots,n, $$
and
$$ r(s) = \mathrm{diag}\{r_1(s),\dots,r_n(s)\}, \qquad R(s) = \mathrm{diag}\{R_1(s),\dots,R_n(s)\}. $$

Let
$$ A(s) : \mathbb{R}^+ \to \{\, n \times n \text{ non-negative matrices, identically zero on the diagonal, with } A(s)\mathbf{u} = \mathbf{u} \,\}, $$
where $$\mathbf{u} = (1,1,\dots,1)^T$$, for all $$s \ge 0$$, and
$$ A_{ij}(s) = P[\text{switch from state } i \text{ to state } j \mid \text{switch happens after residence time } s]. $$

Suppose a unit is in state i. The probability that it switches out between $$(s, s+\delta t)$$, //given// that it did not switch prior to s, is
$$ \frac{r_i(s)}{1 - R_i(s)}\,\delta t. $$

Let $$C_i(s,t)$$ = # of units in state i at time t with current residence time s. For s>0,
$$ C_i(s+\delta t,\, t+\delta t) = C_i(s,t)\left(1 - \frac{r_i(s)}{1-R_i(s)}\,\delta t\right) + \text{HOT}, $$
so that
$$ \frac{\partial C_i}{\partial t} + \frac{\partial C_i}{\partial s} = -\,\frac{r_i(s)}{1-R_i(s)}\,C_i, \qquad t>0, \quad s>0. $$

The new arrivals switching into states determine the boundary condition:
$$ C_j(0,t) = \sum_{i=1}^{n} \int_0^\infty A_{ij}(s)\,\frac{r_i(s)}{1-R_i(s)}\,C_i(s,t)\,ds. $$

In vector notation, $$\mathbf{C} = (C_1,\dots,C_n)^T$$, we have
$$ \mathbf{C}(0,t) = \int_0^\infty A(s)^T\, r(s)\,(I-R(s))^{-1}\,\mathbf{C}(s,t)\,ds, \qquad t>0, \qquad (1) $$
$$ \frac{\partial \mathbf{C}}{\partial t} + \frac{\partial \mathbf{C}}{\partial s} = -\,r(s)\,(I-R(s))^{-1}\,\mathbf{C}, \qquad s>0, \quad t>0. \qquad (2) $$

[Check conservation of mass holds: integrating (2) over s and then using (1) together with $$A(s)\mathbf{u}=\mathbf{u}$$,
$$ \frac{d}{dt}\sum_i \int_0^\infty C_i(s,t)\,ds = \sum_i C_i(0,t) - \sum_i \int_0^\infty \frac{r_i(s)}{1-R_i(s)}\,C_i(s,t)\,ds \equiv 0 $$
as required.]

Solution to (2) is
$$ \mathbf{C}(s,t) = (I - R(s))\,\mathbf{F}(t-s) \qquad (3) $$
for an arbitrary vector function $$\mathbf{F}$$, since directly:
$$ \left(\frac{\partial}{\partial t} + \frac{\partial}{\partial s}\right)(I-R(s))\,\mathbf{F}(t-s) = -\,r(s)\,\mathbf{F}(t-s) = -\,r(s)\,(I-R(s))^{-1}\,\mathbf{C}. $$

Let $$\mathbf{F}(t) = \mathbf{C}(0,t)$$. Then (1) and (3) give
$$ \mathbf{F}(t) = \int_0^\infty A(s)^T\, r(s)\,\mathbf{F}(t-s)\,ds. \qquad (\Sigma) $$

A steady state solution: $$\mathbf{F}(t) \equiv \mathbf{F}^*$$, with
$$ \mathbf{F}^* = M\,\mathbf{F}^*, \qquad M = \int_0^\infty A(s)^T r(s)\,ds, \qquad \mathbf{C}(s) = (I-R(s))\,\mathbf{F}^*. $$

(Note $$\mathbf{u}^T M = \mathbf{u}^T$$: $$\mathbf{u}$$ is left eigenvector for unit eigenvalue.) Hence $$\mathbf{F}^*$$ exists.

What can be said more generally about the general solutions of (Σ) and hence of (1) and (2)?

What sort of perturbations should we consider in order to account for individual variability? For example, suppose the $$r_i$$s and $$A$$s depend very slightly on the unit's identity. Can we justify (2) becoming
$$ \frac{\partial \mathbf{C}}{\partial t} + \frac{\partial \mathbf{C}}{\partial s} = -\,r(s)\,(I-R(s))^{-1}\,\mathbf{C} + \sigma\,\frac{\partial^2 \mathbf{C}}{\partial s^2} $$
for some σ > 0?

What can be said about the solution to this coupled system?

What further extensions could/should we consider?

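As a sanity check on the formulation, the switching process itself is easy to simulate. A minimal Monte-Carlo sketch for a hypothetical two-state example with exponential switching-time densities, so that the hazard is constant and the steady-state occupancies are proportional to the mean residence times:

```python
import random

# Hypothetical two-state example: exponential switching densities
# r_i(s) = lam_i * exp(-lam_i * s), i.e. constant hazard lam_i.  With
# n = 2 the transition matrix A (zero diagonal, rows summing to 1)
# forces every switch to go to the other state.
lam = [1.0, 0.5]   # mean residence times 1 and 2
T = 10000.0        # total simulated time

def simulate(seed=1):
    """Return the long-run fraction of time one unit spends in each state."""
    rng = random.Random(seed)
    t, state = 0.0, 0
    time_in = [0.0, 0.0]
    while t < T:
        dwell = min(rng.expovariate(lam[state]), T - t)
        time_in[state] += dwell
        t += dwell
        state = 1 - state
    return [x / T for x in time_in]

print(simulate())   # close to (1/3, 2/3), the steady state of the model
```

The occupancies approach mean-residence-time proportions, matching the steady state of the renewal equation; non-exponential densities would need residence-time tracking but the same structure.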


!History of oysters
Flat oysters have been commercially cultured in the Eastern Scheldt Estuary (The Netherlands) since 1875. After the severe winter of 1962/1963, which caused high mortality, the stock was much diminished. Searching for alternatives, Dutch oyster farmers introduced the Pacific oyster to the Eastern Scheldt in 1964. The Pacific oyster is native to Japan. The species has also been introduced in several other areas, e.g. Australia, New Zealand, France, the United Kingdom, Ireland and the United States. At the time, introduction of Pacific oysters in the Eastern Scheldt was considered acceptable because offspring were not expected at Eastern Scheldt latitudes. However, the first spatfall (= settling of larvae on the bottom) on dike feet and jetties was recorded in 1976, and political pressure stopped the importation of Pacific oysters in the following year. A second larval outburst in 1982 definitively established wild Pacific oysters in the Eastern Scheldt waters, and oyster banks in intertidal (= area falling dry at low tide) and subtidal (= area below the low water line) areas have been growing ever since.

!Why are Pacific oysters a problem?
One of the main problems concerning the expansion of Pacific oysters in the Eastern Scheldt is the potential interaction with commercially exploited species such as cockles, blue mussels and cultivated oysters. Wild Pacific oysters can compete with these commercial species for food and space. Besides potential commercial losses due to a decline in shellfish quality, this competition for food and space can also affect the food availability and food quality for water birds, which mainly feed on mussels and cockles.

We would like to develop a model that can answer the following questions:
* How do oysters spread?
* Can development in the past be reconstructed?
* Can a prediction be made for future?
* Can the spreading of oysters be stopped?  How?

Although the expanding distribution of Pacific oysters in the Netherlands has been recognised as a possible problem, an oyster survey was not conducted until 1999, when RIVO performed one in the Eastern Scheldt. In order to reconstruct the situation in the past, aerial photographs from 1980 and 1990 were contributed by RIKZ and used by RIVO to identify oyster fields. Together with the oyster map of 2002 they give an impression of the development of intertidal Pacific oysters in time and space.

!More Information
!!The Eastern Scheldt
The Eastern Scheldt is situated in the South West of the Netherlands. It is a former estuary. After the storm floods of 1953 the Delta Project started, in which the Eastern Scheldt was isolated from river input by dams and a storm-surge barrier was built on the seaward side of the estuary. The construction of this storm-surge barrier took place between 1979 and 1986.
!!Life cycle
Pacific oysters have separate sexes. First they develop as males; later in their lifetime they function as females. Oysters spawn (= release eggs and sperm into the water) from their first year on. Spawning occurs in July and August.
Fertilisation takes place in the water column. The fertilised egg cell develops into a larva within one day. These larvae live and grow in the water column for 15 to 30 days (the pelagic period). During this period they can travel large distances, depending on the current velocities and directions. After the pelagic period the larvae drop from the pelagic (water) phase to the bottom (spatfall) and try to find a suitable place to settle. Oysters prefer hard substrates on the bottom, like rocks and stones. However, they can also settle in soft, sandy sediments. Settled oysters grow from April to October, with maximum growth occurring in June. The maximum age Pacific oysters can reach is still unknown, but they seem to reach ages of 20 to 30 years.
!!Feeding habit
Oysters are filter feeders, which means they filter the sea water and select particles they can use as food. Algae are the most important food source. Feeding activity is expressed as the clearance rate, which is the volume of water cleared of particles per unit time. Clearance rates of Pacific oysters are well researched and depend on a large number of variables. Average clearance rates from several studies vary between 2 and 10 l/h/g.
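To get a feel for what these clearance rates mean at estuary scale, a back-of-envelope sketch in which the total oyster biomass is a hypothetical illustrative figure, not a survey result:

```python
# Back-of-envelope filtration estimate.  The clearance rate is taken from
# the 2-10 l/h/g range quoted above; the total oyster biomass is a
# hypothetical illustrative figure, not a survey result.
clearance_l_per_h_per_g = 5.0
stock_grams = 1.0e9            # hypothetical total oyster biomass
hours_per_day = 24

litres_per_day = clearance_l_per_h_per_g * stock_grams * hours_per_day
m3_per_day = litres_per_day / 1000.0
print(m3_per_day)              # 1.2e8 m3 of water filtered per day
```

Numbers of this order make plausible that a large wild oyster stock can deplete algae over a significant part of the basin, which is the competition mechanism described above.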
School of Life Sciences, Keele University

<html><h4><font face="arial,sans-serif,times">Introduction</font></h4>

<p><font face="arial,sans-serif,times">Many of the proteins found in modern man
        and other successful species have evolved over hundreds
        of millions of years through various lineages and species.
        Many of these proteins have important roles in defence
        against invading micro-organisms and thus the evolutionary
        changes have important restrictions in order to maintain
        function albeit in a global rather than a specific sense.
        One such family of proteins is the pentraxins which play
        a key role in innate (non-adaptive as opposed to antibody
        based) immunity, the two major members of the family being
        C-reactive protein (CRP) and serum amyloid P-component (SAP).
        Both have been found in all species in which they have been
        sought, and in particular in the ancient invertebrate "living
        fossil" Limulus polyphemus (the horseshoe "crab") and in man
        where CRP is the major acute phase reactant produced in response
        to tissue damage and inflammation. CRP levels are routinely and
        universally measured in man as a clinical indicator of
        underlying infection.</font></p>

<h4><font face="arial,sans-serif,times">Genes and Proteins: the basics</font></h4>
<p><font face="arial,sans-serif,times">Proteins are composed of long chains of <strong>amino
        acids</strong> (around 200 in the case of the pentraxins); there
        are 20 different amino acids. Some of the amino acids in
        a protein are important in maintaining the structure of
        the protein and some are essential to function. These
        requirements will vary depending on the protein involved. Each
        amino acid is coded for (defined by, produced by) a triplet
        of nucleotide <strong>bases</strong> (a <strong>codon</strong>)
        in the relevant piece of DNA.
        Rather than coding for an amino acid, some DNA triplets are
        "Stop" codons signalling the end of protein synthesis. There
        are four different bases C, G, A and T. A single change in one
        of these bases, from one of the four bases to another, may
        produce a change in the resulting amino acid and hence the
        final protein. Since there are only 20 amino acids, the
        genetic code is <strong>degenerate</strong> with some amino acids specified
        by sets of codons. As can be seen in Figure 1
        (the <strong>Genetic Code</strong>)
        a change in the third base of the codon often results in no
        change of amino acid, while a change in the first or second
        base of the codon usually does.</font></p>

<table border="0" width="67%">
<tr>
<td rowspan="2" align="center">1st position<br>(base 1)</td>
<td colspan="4" align="center">2nd position (base 2)</td>
<td rowspan="2" align="center">3rd position<br>(base 3)</td>
</tr>
<tr>
<td align="center"><strong>T</strong></td>
<td align="center"><strong>C</strong></td>
<td align="center"><strong>A</strong></td>
<td align="center"><strong>G</strong></td>
</tr>
<tr><td colspan="6">&nbsp;</td></tr>
<tr>
<td rowspan="4" align="center" valign="center"><strong>T</strong></td>
<td align="center">PHE</td>
<td align="center">SER</td>
<td align="center">TYR</td>
<td align="center">CYS</td>
<td align="center"><strong>T</strong></td>
</tr>
<tr>
<td align="center">PHE</td>
<td align="center">SER</td>
<td align="center">TYR</td>
<td align="center">CYS</td>
<td align="center"><strong>C</strong></td>
</tr>
<tr>
<td align="center">LEU</td>
<td align="center">SER</td>
<td align="center">STOP</td>
<td align="center">STOP</td>
<td align="center"><strong>A</strong></td>
</tr>
<tr>
<td align="center">LEU</td>
<td align="center">SER</td>
<td align="center">STOP</td>
<td align="center">TRP</td>
<td align="center"><strong>G</strong></td>
</tr>
<tr><td colspan="6">&nbsp;</td></tr>
<tr>
<td rowspan="4" align="center" valign="center"><strong>C</strong></td>
<td align="center">LEU</td>
<td align="center">PRO</td>
<td align="center">HIS</td>
<td align="center">ARG</td>
<td align="center"><strong>T</strong></td>
</tr>
<tr>
<td align="center">LEU</td>
<td align="center">PRO</td>
<td align="center">HIS</td>
<td align="center">ARG</td>
<td align="center"><strong>C</strong></td>
</tr>
<tr>
<td align="center">LEU</td>
<td align="center">PRO</td>
<td align="center">GLN</td>
<td align="center">ARG</td>
<td align="center"><strong>A</strong></td>
</tr>
<tr>
<td align="center">LEU</td>
<td align="center">PRO</td>
<td align="center">GLN</td>
<td align="center">ARG</td>
<td align="center"><strong>G</strong></td>
</tr>
<tr><td colspan="6">&nbsp;</td></tr>
<tr>
<td rowspan="4" align="center" valign="center"><strong>A</strong></td>
<td align="center">ILE</td>
<td align="center">THR</td>
<td align="center">ASN</td>
<td align="center">SER</td>
<td align="center"><strong>T</strong></td>
</tr>
<tr>
<td align="center">ILE</td>
<td align="center">THR</td>
<td align="center">ASN</td>
<td align="center">SER</td>
<td align="center"><strong>C</strong></td>
</tr>
<tr>
<td align="center">ILE</td>
<td align="center">THR</td>
<td align="center">LYS</td>
<td align="center">ARG</td>
<td align="center"><strong>A</strong></td>
</tr>
<tr>
<td align="center">MET</td>
<td align="center">THR</td>
<td align="center">LYS</td>
<td align="center">ARG</td>
<td align="center"><strong>G</strong></td>
</tr>
<tr><td colspan="6">&nbsp;</td></tr>
<tr>
<td rowspan="4" align="center" valign="center"><strong>G</strong></td>
<td align="center">VAL</td>
<td align="center">ALA</td>
<td align="center">ASP</td>
<td align="center">GLY</td>
<td align="center"><strong>T</strong></td>
</tr>
<tr>
<td align="center">VAL</td>
<td align="center">ALA</td>
<td align="center">ASP</td>
<td align="center">GLY</td>
<td align="center"><strong>C</strong></td>
</tr>
<tr>
<td align="center">VAL</td>
<td align="center">ALA</td>
<td align="center">GLU</td>
<td align="center">GLY</td>
<td align="center"><strong>A</strong></td>
</tr>
<tr>
<td align="center">VAL</td>
<td align="center">ALA</td>
<td align="center">GLU</td>
<td align="center">GLY</td>
<td align="center"><strong>G</strong></td>
</tr>
<tr><td colspan="6"><p><strong>Figure 1: The Genetic Code.</strong>
        For example, the DNA codon (base triplet) ATG gives the
        protein amino acid Met (methionine) and the codons CGT, CGC,
        CGA, CGG, AGA, and AGG give the amino acid Arg (arginine).</p></td></tr>
</table>
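The third-base degeneracy visible in Figure 1 can be checked mechanically; a minimal sketch using a small excerpt of the standard code (DNA alphabet):

```python
# Tiny excerpt of the standard genetic code (DNA codons) illustrating
# third-base degeneracy: all four CG_ codons give arginine, whereas the
# third base of ATG (Met) is not free to change to A.
codon_table = {
    "CGT": "ARG", "CGC": "ARG", "CGA": "ARG", "CGG": "ARG",
    "ATG": "MET", "ATA": "ILE",
}

def silent(codon, new_base, pos=2):
    """True if changing base `pos` (0-indexed) to new_base leaves the
    encoded amino acid unchanged (a silent point mutation)."""
    mutated = codon[:pos] + new_base + codon[pos + 1:]
    return codon_table.get(mutated) == codon_table.get(codon)

print(silent("CGT", "A"))   # CGT -> CGA: still ARG, silent
print(silent("ATG", "A"))   # ATG -> ATA: MET -> ILE, not silent
```

The `silent` helper is of course hypothetical illustration; extending `codon_table` to all 64 codons of Figure 1 lets it classify any point mutation as silent or not.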
<h4><font face="arial,sans-serif,times">Mutation and Evolution</font></h4>
<p><font face="arial,sans-serif,times">A suitably simple view of protein evolution is
        that it arises from <strong><em>random</em></strong>
        changes (<strong>mutations</strong>) in amino
        acids, resulting from random changes (<strong>mutations</strong>) in the
        relevant coding DNA, and that these mutations are dominated
        by <strong>point mutations</strong> where a change in a single DNA base
        occurs. Other DNA mutations, not to be considered here,
        include insertions, deletions and frameshifts. Point
        mutations may have a variety of effects:</font></p>
<ol type="1"><font face="arial,sans-serif,times"><li><p><strong>Detrimental:</strong> A change in resulting amino acid
        and a change in resulting protein structure and/or
        function with an instant deleterious effect on the
        organism resulting either in its termination, in
        which case the mutated DNA will not be passed on to
        future generations and species, or in a genetic disease
        such as cystic fibrosis or sickle cell anaemia. Introducing
        a STOP signal before the protein is complete is another
        example. We need not consider this effect, except as a
        constraint, when looking at protein evolution in
        living, healthy creatures.</p></li>
<li><p><strong>Benign:</strong> No effect on the
        resulting protein as the new codon will code for the
        same amino acid (a <strong>silent</strong> mutation). The DNA will be
        passed on to future generations and species with no
        discernible effects.</p></li>
<li><p><strong>Benign:</strong> No significant effect on
        the function or efficiency of the resulting protein even though
        a change in amino acid is produced. This requires that the
        mutated amino acid is not essential for structure and/or
        function or that the new amino acid fulfils the same role
        as that which it has replaced. The DNA will be passed on to
        future generations and species.</p></li>
<li><p><strong>Beneficial:</strong> A beneficial effect
        in terms of survival and dominance, the resulting mutated
        protein enhancing both. The mutated DNA will be passed on
        to future generations and species and the line is likely
        to preferentially succeed (<strong>survival of the fittest</strong>). This
        needs to be considered as the dominating effect not only
        because most proteins show only slight variations between
        races of an existing species, but because what
        we see now is the result of evolutionary selection.</p></li>
</font></ol>
<p><font face="arial,sans-serif,times">A commonly used model is that point (<strong>acceptable</strong>)
        DNA mutations (<strong>PAMs</strong>) occur linearly with time,
        at a rate P of approximately 20 PAMs /
        100 million years. PAMs are those in items 2-4
        above. In this model, two proteins of 200 amino
        acids (coded by 600 bases) which have diverged
        over 500 million years (a total of 1,000 million
        years of evolutionary divergence) would show a
        difference of 200 bases and the two genes would
        be 400/600, i.e. 67%, <strong>homologous</strong> (identical).
        Translating this into <strong>amino acid homology</strong>
        (which is in practice
        considerably less than the DNA homology) is not
        straightforward; in the limits 200 base changes
        could produce 200 amino acid changes (0% amino
        acid homology) or they could produce none (100%
        homology). The position and nature of the mutation
        are clearly of paramount importance, but account
        needs to be taken of the degeneracy in the genetic
        code, the possibility of mutation back to a previous
        state, and the restrictions imposed by preservation
        of structure and function (exclude detrimental
        mutations, item 1 above). Basing the PAM model
        on the mutation rate given above (P = 1 PAM /
        5 million years) fails spectacularly when the
        new data given below is considered.</font></p>
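The arithmetic of the paragraph above can be made explicit; a short sketch in which the back-mutation correction uses a Jukes-Cantor-style model, which is our assumption and not part of the original PAM description:

```python
import math

# Naive linear PAM count, as in the text: P = 20 accepted point mutations
# per 100 million years for a 600-base gene.
bases = 600
pams_per_year = 20 / 100e6
divergence_years = 1000e6          # 2 x 500 million years of divergence

changes = pams_per_year * divergence_years   # 200 expected base changes
naive_homology = 1 - changes / bases         # 400/600, i.e. 67%
print(naive_homology)

# One of the corrections the text calls for -- mutation back to a previous
# state -- can be sketched with a Jukes-Cantor-style model (our assumption):
# after u substitution events per site, the expected identical fraction is
# 1/4 + 3/4 * exp(-4u/3), higher than the naive linear count suggests.
u = changes / bases
jc_homology = 0.25 + 0.75 * math.exp(-4 * u / 3)
print(jc_homology)
```

Even this simple correction shows that observed DNA homology understates the number of mutation events, one reason why calibrating the PAM rate directly against homology figures is delicate.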

<h4><font face="arial,sans-serif,times">The Pentraxins</font></h4>
<p><font face="arial,sans-serif,times">In simple terms, some 500 million years ago
        (at least) an evolutionary divergence in coelomata,
        the ancestor(s) of both chordates (eventually leading
        to vertebrates and humans) and arthropods (leading to,
        for example, spiders and the horseshoe crab) led to
        the establishment of two new evolutionary lines.
        Note that present day man and the horseshoe crab
        represent over 1,000 million years (2 x 500) of
        evolutionary divergence.</font></p>

<p><font face="arial,sans-serif,times">We have recently shown that the two proteins
        CRP and SAP are both present in the horseshoe crab, while
        others have shown the presence of both in man and in every
        other species in which they have been sought. This is
        consistent with the view that the two proteins arose
        from a common ancestor protein (say CRP) via a gene
        duplication event (creation of a new, additional gene
        by duplication of an existing gene) and that they have
        been evolving and diverging since. It is not consistent
        with the accepted view, arising from using P=1PAM / 5
        million years in the PAM model and from the previous
        assumption that SAP did not exist in invertebrates,
        that the duplication event occurred hundreds of millions
        of years <em>after</em> the divergence of the two evolutionary
        lines which led to man and the horseshoe crab. The two
        proteins show 51% amino acid homology in man (similar in
        other mammals) and 34% in the horseshoe crab.</font></p>

<h4><font face="arial,sans-serif,times">The parameters and assumptions:</font></h4>
<ul><font face="arial,sans-serif,times"><li><p>The protein SAP was first generated
        from CRP by gene duplication say 500 million years
        ago, at which point (time zero) the two genes and
        proteins (approximately 600 bases and 200 amino acids)
        were identical.</p></li>
<li><p>If we need to define the initial amino
        acid sequence in terms of relative abundance of the
        20 amino acids, this can be reasonably based on the
        numbers of coding triplets (eg. 6 times more Arg than Trp).</p></li>
<li><p>The proteins SAP and CRP have been
        diverging through random point acceptable mutations of
        the coding DNA for 500 million years and are now 51%
        homologous in man and 34% in the horseshoe crab at the
        amino acid level. The rate P of random point acceptable
        mutations is constant and linear with time (current
        thinking suggests P is the same for all species and
        is around one base every 5 million years).</p></li>
<li><p>A certain % of the original amino acid
        sequence of CRP, and hence of the evolved, present
        day CRPs and SAPs, is required to remain constant
        by the constraints of structure and function. A
        reasonable guesstimate in terms of amino acids is
        20% in man (the same 20% in both proteins) and 10%
        (again the same 10%) in the horseshoe crab.</p></li></font></ul>

<h4><font face="arial,sans-serif,times">The problems:</font></h4>
<ul><font face="arial,sans-serif,times"><li><p>In (a) humans (b) the horseshoe crab,
        what is the future steady-state minimum amino acid
        homology (%) between the two proteins SAP and CRP?
        (This will be independent of all parameters except
        the constant % of conserved amino acids.)</p></li>
<li><p>Can we now deduce P for both species?</p></li>
<li><p>Can we now determine when (from time zero)
        the minimum homology will occur?</p></li>
<li><p>Can we deduce the % homology between the SAME
        protein (CRP) in man and the horseshoe crab?</p></li></font></ul>
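Under the stated assumptions, problem (a) reduces to simple probability: the fixed fraction always matches, and the remainder drifts to random residues. A sketch, taking the match probability of two independent random residues as q = 1/20 (a codon-weighted q would differ slightly):

```javascript
// Steady-state minimum homology: xFixed is the conserved fraction,
// q the chance that two independent random residues coincide.
function steadyStateHomology(xFixed, q) {
  q = (q === undefined) ? 1 / 20 : q;
  return 100 * (xFixed + (1 - xFixed) * q); // % amino acid homology
}

// Man (20% fixed): 20 + 80/20 = 24%; horseshoe crab (10% fixed): 14.5%
```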

<h4><font face="arial,sans-serif,times">Generalisation:</font></h4>
<ul><font face="arial,sans-serif,times"><li><p>Can we solve the problems if we generalise
        to any or all of  <em>M</em> million years (since gene duplication),
        3<em>N</em> bases and <em>N</em> amino acids, <em>H</em>% current homology between
        SAP and CRP, <em>x</em>% amino acids required to
        remain constant?</p></li>
<li><p>Is there a function which describes amino
        acid homology versus time for two diverging proteins,
        and if so what numerical data is required to establish
        this function?</p></li></font></ul>
When someone makes a mobile phone call while travelling by road, the call has to be handed over from one Base Transceiver Station (BTS) to the next, and the timing of these handovers enables the vehicle speed to be estimated.
We have extracted GSM signalling data from a selected area around Munich during three months last summer in order to detect road traffic congestion information directly from the mobile network. As a result, we obtained noisy velocity-over-time data, neither equidistant in time nor exact.

The problems are:
# Find a filter that, with the least possible delay, detects sharp declines of the average speed and hence tells us the beginning of a traffic jam.
# Find a filter that detects the return to normal conditions after a traffic jam, also as soon as possible.
# Find a filter to give a reasonable estimate for the possible speed at which one could expect to be able to travel by car during normal or congested conditions. (The speed distribution obtained from GSM data is the sum of the distribution of car speeds and that of truck speeds, and so is often bimodal.) 
This is a sample of the data for illustration:
{{c{Figure 1. Sample Data}}}
Each red dot represents one observed traveller who passed a given road segment with the corresponding speed at the given time. The black line is the result of a rudimentary sample filter: a moving average of the last 32 values. Data for Thursday, Friday and Saturday shows traffic congestion as the travelled speeds drop sharply. Note that during night time there is barely any data, while on weekends there is less data than during the week. This should not be confused with standing traffic, however.

Note that the filter shall be used in near-real time while the data is generated. Hence it may not take future data values to compute a result for a given point in time.
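A minimal causal detector along these lines can be sketched as follows (the smoothing constant and speed thresholds are illustrative assumptions, not values fitted to the Munich data): an exponentially weighted moving average with two-threshold hysteresis, so the jam flag does not flap.

```javascript
// Causal jam detector: EWMA of observed speeds plus hysteresis.
// Uses only past samples, so it can run in near-real time.
function makeJamDetector(alpha, jamBelow, clearAbove) {
  alpha = alpha || 0.1; jamBelow = jamBelow || 30; clearAbove = clearAbove || 50;
  let avg = null, inJam = false;
  return function update(speedKmh) {
    avg = (avg === null) ? speedKmh : alpha * speedKmh + (1 - alpha) * avg;
    if (!inJam && avg < jamBelow) inJam = true;        // sharp decline: jam begins
    else if (inJam && avg > clearAbove) inJam = false; // recovery: jam over
    return { avg: avg, inJam: inJam };
  };
}
```

For problem 3, a running upper quantile (say the 80th percentile of a recent window) is a more useful estimate of the achievable car speed than the mean, since the mean is pulled down by the truck mode of the bimodal distribution.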

The identification of such a filter may help to introduce different traffic information services. I will be glad to take comments and answer any questions.

The complete data spans a trial carried out during three months last summer on approximately 110 km of motorway around Munich, Germany. It is organized as a set of 46 Excel sheets and will be supplied on CD.
Reconstruction of 3D morphology from optical sectioning of biological objects
Description of task for ESGI workshop 17-21 August ’09 - formulated by Unisensor
Unisensor have developed a new instrument for detecting and analysing microscopic particles in various types of fluid based on a novel technology involving optical sectioning of the sample. Examples of particles to be measured are blood cells, bacteria, sperm cells, and mammal oocytes and embryos (unfertilised and fertilised eggs, respectively). The aim of the task is to reconstruct the 3D morphology of the sample objects from a series of images through different slices of the sample.
The measurement principle
The detection principle is illustrated in figure 1. It consists of a microscope which can be adjusted automatically in the vertical direction. By moving the camera up and down stepwise and acquiring images for each step, the entire sample can be imaged in optimal focus and the 3D information of the sample is thus available for analysis. The contents of the sample can be analysed in numerous ways depending on the nature of the sample. The more advanced analysis requires information about the 3D morphology of the objects.
The technique of sectioning is well known from other similar applications, such as fetal ultrasound scanning, skin tomography, and CT and NMR scanning techniques. The most important specific challenge in this case lies in the fact that the lens has a finite depth of field, i.e. parts of the objects within a certain range of the focus plane will be in focus in the same picture. This has the advantage that fewer images are necessary in order to cover the entire sample, but the disadvantage that each image will include a strong component of the neighbouring images in a blurry (out-of-focus) form. Other challenges involve optical artefacts due to spatial and temporal coherence of the illumination source, and increasing scattering of the illumination light down through the sample (changing point spread function). These issues could indicate the need for using a complex point spread function, i.e. including both the phase and strength of the field.
Figure 1: Illustration of the measurement principle
Figure 2: Set of bright field images taken through a mouse oocyte. Diameter of the oocyte is 70 μm.
Figure 2 shows an example of the sectioning technique on a mouse oocyte (an egg cell that is not
fertilised), with 8 images taken at different sections through the oocyte. The inside of the object clearly
shows a granular structure, which varies through the egg. The first and the last images are blurred,
indicating that the image plane is well outside the object.
Objectives and success criteria
The overall purpose of the task is to develop a method which can reconstruct the 3D morphology of the
object under investigation from the set of images. Focus is especially on identifying sharp changes in
contrast, e.g. cell membranes and similar objects. Objects are in general assumed to be semi-transparent,
i.e. weak phase objects.
On the path towards this ultimate goal there are a number of interesting milestones:
- Make a computer-generated image stack of simple objects (2D slab, sphere, cube, or similar). These
images can be used as inspiration for the first version of the reconstruction routine.
- Make some observations on the uniqueness of the solutions and consider/discuss potential ways to
improve the 3D reconstruction.
- Reconstruct 3D morphology from computer-generated images of objects like cubes and spheres.
- Compensate images for complex point spread function. This is mainly important for lenses with high
numerical aperture, i.e. small depth of field (and high optical resolution)
- Develop a method to separate focus information from out-of-focus information possibly by comparing
two neighbouring images.
- Apply the reconstruction routine to experimentally acquired images of
o well-defined 2D patterns
o glass sphere clusters
o white blood cells
o mouse egg cells
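The first milestone can be prototyped in a few lines: build a binary voxel sphere, then let each focal plane mix in neighbouring slices with a triangular weight. The triangular kernel is a crude stand-in for the real (complex) point spread function, used here only to generate test stacks.

```javascript
// n x n x n binary volume of a centred sphere of the given radius.
function sphereSlices(n, radius) {
  const c = (n - 1) / 2, vol = [];
  for (let z = 0; z < n; z++) {
    const slice = [];
    for (let y = 0; y < n; y++) {
      const row = [];
      for (let x = 0; x < n; x++) {
        const d2 = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2;
        row.push(d2 <= radius * radius ? 1 : 0);
      }
      slice.push(row);
    }
    vol.push(slice);
  }
  return vol;
}

// Synthetic image stack: focal plane z0 sees slice z with weight
// max(0, 1 - |z - z0| / depthOfField), normalised per plane.
function imageStack(vol, depthOfField) {
  const n = vol.length;
  return vol.map((_, z0) => {
    let img = vol[0].map(r => r.map(() => 0)), wsum = 0;
    for (let z = 0; z < n; z++) {
      const w = Math.max(0, 1 - Math.abs(z - z0) / depthOfField);
      if (w === 0) continue;
      wsum += w;
      for (let y = 0; y < n; y++)
        for (let x = 0; x < n; x++) img[y][x] += w * vol[z][y][x];
    }
    return img.map(r => r.map(v => v / wsum));
  });
}
```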
Available information and data
There is a large amount of background
information and actual measurement data
available. Among the most important of
these should be mentioned:
- Complex point spread function for all
involved optical systems
- Measurements on objects with well-known
and well-defined 3D morphology
- Measurements on optical systems
with low depth of field (this makes it easier to separate focus information from out-of-focus information)
Figure 3: Contour plot of the irradiance distribution for the predicted point
spread function for a diffraction-limited optical system. u is a reduced axial
coordinate (depth) and v is a reduced lateral coordinate. See Born & Wolf.
The predictive quality of climate models can be enhanced by incorporating information about temperatures from the past. A number of methods have been developed to determine the ancient temperatures of the upper ocean, and one of these is based on the use of deep-sea microfossils.

For many millions of years a large number of species of the invertebrate group planktic foraminifera have lived in the upper water level of the world's oceans. These organisms produce little shells of calcite (CaCO3) that function as a skeleton. For a fixed oxygen-isotope ratio of the dissolved CO2 in the ambient water, the water temperature has a direct influence on the isotope composition of the calcite shells of the plankton. If one knew the isotope composition of the ocean water, one could hence deduce the ocean water temperature from the isotope composition of the calcite shells. The isotope composition of the ocean water from ancient times is practically unknown, however, and, for theoretical reasons, it is not advisable to try to model it either. One way to get around this problem is to use more than one species of plankton:

The different species of plankton do not prefer the same ecological conditions. Some are adapted to live under colder conditions than others. One may hence in principle infer absolute temperature differences from plankton that lived during the same time-span in the same water level in the same region: the isotope ratio of the water remains fairly constant during relatively short time-spans, but temperatures differ considerably both regionally (see figure 1) and in time. Information could thus be obtained about both the average temperatures in a given era and the variability of these temperatures. The variability is quite large, and therefore interesting to know. Other methods to determine ancient sea water temperatures yield only mean temperatures, but the NIOZ is interested in obtaining variances as well.
{{c{Figure 1: Sea surface temperatures in the Arabian Sea.}}}
{{c{Figure 2: A living foraminiferum. //J. Bijma (AWI-Bremerhaven)//}}}

Results obtained so far indicate that the isotope composition of the calcite shells is determined not only by the temperature of the water in which the plankton lived, but also by other ecological influences. For instance, food availability also influences the relative abundance of the different species of foraminifera. It is hence difficult to draw direct conclusions from the isotope composition data from the fossil calcite shells.

The challenge now is to construct a model that encompasses the ecology of the organisms and can still be used to infer the temperatures of the past from the isotope composition data found in the deep-sea sedimentary record. It would be very easy to make this model extremely complicated, considering the number of side effects involved. How can we reduce it to a reasonable model that still simulates enough of the observed phenomena?
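As a toy illustration of the two-species idea (the coefficients below are placeholders, not a calibrated palaeotemperature equation): with a linearised relation T = a − b(δ_calcite − δ_water), two species that recorded the same water give a temperature difference from which the unknown δ_water cancels.

```javascript
// Toy two-species calculation with a linearised palaeotemperature relation
// T = a - b * (deltaCalcite - deltaWater). The slope b (degC per permil)
// is a placeholder here; real calibrations differ per species.
function temperatureDifference(deltaCalcite1, deltaCalcite2, b) {
  return -b * (deltaCalcite1 - deltaCalcite2); // deltaWater cancels out
}
// Heavier shells (higher delta) mean colder water, hence the minus sign.
```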
|Requires|~TW2.1.x, TinyMCE 2.1.x|
|Browsers|Firefox 2.0.x|
Integrates the TinyMCE rich-text editor into TiddlyWiki.

After installation, tag a tiddler as richText and edit it.

# Download tinyMCE from http://tinymce.moxiecode.com/download.php and unzip it somewhere. By default, tiny_mce.js script path should be tiny_mce/tiny_mce.js (so in a tiny_mce subdirectory).
# Import the RichTextPlugin tiddler and tag it as a systemConfig.
# Add txtTinyMCEPath and txtRichTextTag in the advanced options panel, or use the following:
**<<option txtTinyMCEPath>> TinyMCE path (relative or absolute)
**<<option txtRichTextTag>> tiddlers with this tag will be edited in rich-text mode by default
# Adapt these option values to your needs.
# Save and reload your tiddlywiki.
# __Optionally__, you can add a richText macro to the EditTemplate toolbar. It adds a button to switch the rich-text editor on and off.
# __Optionally__, edit the tiny_mce\themes\simple\css\*.css files to adapt the look and feel of the rich-text editor.
# Tag a tiddler as richText (or the value you put in the options) and edit it. Enjoy!

*20-03-07: ver 1.0

config.commands.saveTiddler.richTextPreviousHandler = config.commands.saveTiddler.handler;
config.commands.saveTiddler.handler = function (event,src,title) {
 if (typeof tinyMCE!= "undefined") config.commands.richText.RichTextMode("off",title);
 return config.commands.saveTiddler.richTextPreviousHandler(event,src,title);
};

config.commands.cancelTiddler.richTextPreviousHandler = config.commands.cancelTiddler.handler;
config.commands.cancelTiddler.handler = function (event,src,title){
 if (typeof tinyMCE!= "undefined") config.commands.richText.RichTextMode("off",title);
 return config.commands.cancelTiddler.richTextPreviousHandler(event,src,title);
};

config.commands.deleteTiddler.richTextPreviousHandler = config.commands.deleteTiddler.handler;
config.commands.deleteTiddler.handler = function (event,src,title){
 if (typeof tinyMCE!= "undefined") config.commands.richText.RichTextMode("off",title);
 return config.commands.deleteTiddler.richTextPreviousHandler(event,src,title);
};

config.commands.editTiddler.richTextPreviousHandler = config.commands.editTiddler.handler;
config.commands.editTiddler.handler = function (event,src,title){
 var res = config.commands.editTiddler.richTextPreviousHandler(event,src,title);
 if (store.getTiddler(title).tags.contains(config.options.txtRichTextTag)) 
 if (typeof tinyMCE!="undefined") config.commands.richText.RichTextMode("on",title);
 return res;
};

merge(config.options,{txtTinyMCEPath : "tiny_mce/tiny_mce.js", txtRichTextTag : "richText"},true);

config.commands.richText = {
 handler : function (event,src,title){
 this.RichTextMode("switch", title);
 return false;
 },
 text:"Richtext (on/off)",
 tooltip:"write it in rich text",
 matchingRules : { // global definition to avoid defintion in a recursive function
 strong : {wikIn : "''", wikOut : "''"}, 
 em : {wikIn : "//", wikOut : "//" }, 
 u : {wikIn : "__", wikOut : "__" }, 
 strike : {wikIn : "--", wikOut : "--" }, 
 p : {wikIn : "\n", wikOut : "" }, 
 br : {wikIn : "\n", wikOut : "" }, 
 li : {wikIn : "\n", wikOut : ""}, 
 ul : {wikIn : "", wikOut : ""}, 
 ol : {wikIn : "", wikOut : ""}
 },
 tinyMCELoad :function (){
 var scriptElement = document.createElement("script");
 scriptElement.src = config.options.txtTinyMCEPath;
 scriptElement.type= "text/javascript";
 scriptElement.language = "javascript";
 document.getElementsByTagName("head")[0].appendChild(scriptElement); // attach so the script actually loads
 },
 tinyMCEInit : function () {
 if (typeof tinyMCE== "undefined") window.setTimeout("config.commands.richText.tinyMCEInit()",100);
 else tinyMCE.init({mode : "none", theme : "simple", gecko_spellcheck : "true", strict_loading_mode : true}); //wait until script is loaded
 },
 WikiToHTML : function(myString){ //Convert Wiki code to HTML code
 for (tag in this.matchingRules){
 if ((this.matchingRules[tag].wikIn==this.matchingRules[tag].wikOut)&&(this.matchingRules[tag].wikIn!="")){ // format delimiters
 myString = this.ReplaceWithTag(myString, this.matchingRules[tag].wikIn, "<"+tag+">", "</"+tag+">");
 }
 }
 var lines = myString.split("\n");
 var ul=0, ol=0, ulLevel=0, olLevel=0;
 for (cpt=0; cpt<lines.length; cpt++){

 olLevel=(/^#+/.exec(lines[cpt])||"").toString().length; // count # at line begining
 ulLevel=(/^\*+/.exec(lines[cpt])||"").toString().length; // count * at line begining
 lines[cpt]=lines[cpt].replace(/^#+/,""); // delete wiki symbols before replacing with equivalent HTML
 lines[cpt]=lines[cpt].replace(/^\*+/,""); // delete wiki symbols before replacing with equivalent HTML
 if (ulLevel||olLevel) lines[cpt]="<li>"+lines[cpt]+"</li>"; // wiki line are paragraphs or list items
 else lines[cpt]="<p>"+lines[cpt]+"</p>";
 if (ulLevel>ul) lines[cpt]=Array(ulLevel-ul+1).join("<ul>")+lines[cpt]; // list open tags match wiki symbol count changes, here unordered list. Use Array.join(+1) to repeat string.
 if (olLevel>ol) lines[cpt]=Array(olLevel-ol+1).join("<ol>")+lines[cpt]; // list open tags match wiki symbol count changes, here ordered list. Use Array.join(+1) to repeat string.
 if (ulLevel<ul) lines[cpt]=Array(ul-ulLevel+1).join("</ul>")+lines[cpt]; // list close tags match wiki symbol count changes, here unordered list. Use Array.join(+1) to repeat string.
 if (olLevel<ol) lines[cpt]=Array(ol-olLevel+1).join("</ol>")+lines[cpt]; // list close tags match wiki symbol count changes, here ordered list. Use Array.join(+1) to repeat string.
 ul=ulLevel; ol=olLevel;
 }
 var res = lines.join("");
 for(var cpt2=0; cpt2<ul; cpt2++) res+="</ul>"; // if a list item is the last line, close tags here
 for(var cpt2=0; cpt2<ol; cpt2++) res+="</ol>";
 return res;
 },
 HTMLNodeToWiki : function(myNode, ol, ul, last){//Convert HTML code to Wiki code
 if (myNode.nodeType==3) return myNode.textContent; // final node level = text
 var ol=ol||""; var ul=ul||""; var last=last||"ul"; var res = "";
 var nName = myNode.nodeName.toLowerCase();
 switch (nName) {
 case "ul" : ul+="*"; last = "ul"; break; //increase list level
 case "ol" : ol+="#"; last = "ol"; break; //increase numerical list level
 case "li" : res = eval(last); //apply list level to wiki code
 for (var cpt=0; cpt< myNode.childNodes.length; cpt++)
 res += this.HTMLNodeToWiki(myNode.childNodes[cpt], ol, ul, last); // convert children recursively
 if (this.matchingRules[nName]) // then HTML element has wiki equivalent
 res = this.matchingRules[nName].wikIn + res + this.matchingRules[nName].wikOut;
 return res;
 },
 ReplaceWithTag : function(myString, lookFor, tagForOpen, tagForClose){ // replace tag alternatively with tagForOpen and tagForClose
 var stringArray = myString.split(lookFor);
 var res=stringArray[0];
 for (var cpt=1;cpt<stringArray.length;cpt++){
 if (cpt%2!=0) res=res+tagForOpen+stringArray[cpt];
 else res=res+tagForClose+stringArray[cpt];
 }
 return res;
 },

 RichTextMode : function (mode, title){
if (typeof tinyMCE== "undefined") return false;
 var editorID="richtext"+title;
 var area = document.getElementById("tiddler"+title).getElementsByTagName("textarea")[0];
 var statut = (tinyMCE.getInstanceById(editorID)!=null);

 if (mode!="on" && mode !="off") mode="switch"; // only accept "on", "off" or "switch" (default value)
 if (statut && (mode=="switch")) mode="off"; 
 if (!statut && (mode=="switch")) mode="on";

 if (mode=="on" && !statut){
 if (mode=="off" && statut){
 var myDiv = document.createElement("div");
 area.value = this.HTMLNodeToWiki(myDiv).replace(/\n/,""); // remove unuseful first carriage return

if (navigator.userAgent.indexOf('Gecko') != -1) config.commands.richText.tinyMCELoad(); 

In radiological exposure, risk is the probability of a serious health effect as a result of that exposure, and is proportional to the expectation of the annual dose. The aim of probabilistic safety assessment is to estimate the expected dose as a function of time; however, there is no expert opinion on the PDF of the dose, and information on it comes only from the observed distribution. Errors arise in the estimates when this distribution is highly skewed.

The problem is to estimate this error given various assumptions on the form of the distribution.
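As a concrete instance of the skewness problem (assuming, purely for illustration, a lognormal dose distribution): the mean then exceeds the median by a factor exp(σ²/2), so a summary statistic that tracks the bulk of the observations can badly understate the expected dose.

```javascript
// For a lognormal(mu, sigma) distribution: median = exp(mu),
// mean = exp(mu + sigma^2/2). Their ratio depends on sigma alone.
function lognormalMeanOverMedian(sigma) {
  return Math.exp(sigma * sigma / 2);
}
// sigma = 1.5 gives a mean roughly 3 times the median.
```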

!!Part 1: Roll Coating Technology
Thin film coating (10-20 micron) is applied on a sheet of glass. A sketch of a coater setup is shown below (reproduced from Bürkle GmbH). The small roll on the left (doctor roll) and the large roll (applicator roll) are pressed against each other. A pool of liquid (viscosity of 1-4 mPa.s) is present between these two rolls on the upper side (red coloured). The applicator roll has a deformable rubber cover. The rubber cover is gravured, usually with continuous grooves of 50-200 micron (width and depth). The sheet of glass can move in the same direction (forward roll coating) or in the opposite direction (reverse roll coating) relative to the roll at the contact position.
Operational parameters are: pressure of applicator roll on glass plate, pressure of applicator roll on (incompressible, smooth) doctor roll, roll speed, slip speed between applicator roll and glass plate, groove dimensions, groove shape.
''Photovoltaic Systems Coating Solutions''

The coating always has an on-set and an off-set: there is always a “first touch” and a “last touch” moment of the applicator roll with the glass. The grooves are deformed during contact with the glass and regain their shape at the end of the contact, close to the moment of liquid film splitting. We have experienced that the film thickness at the beginning and at the end of the glass plate can differ from the steady-state film thickness.
''Question 1:'' Is it possible to give a (qualitative) physical/mathematical description of the on-set and off-set film length and thickness for a deformable, gravured roll depending on the operational parameters?
''Question 2:'' What is the (initial) shape and thickness of the liquid film on the glass plate at different operating conditions?
''Additional information:''
1. R.W. Hewson, N. Kapur, P.H. Gaskell, A theoretical and experimental investigation of tri-helical gravure roll coating, Chemical Engineering Science Vol. 61, Issue 16 (2006), pp. 5487-5499
2. C. A. Powell, M. D. Savage and P. H. Gaskell, Modelling the Meniscus Evacuation Problem in Direct Gravure Coating, Trans IChemE, Vol 78, Part A, January 2000
3. M. J. Gostling, M. D. Savage, A. E. Young and P. H. Gaskell, A model for deformable roll coating with negative gaps and incompressible compliant layers, J. Fluid Mech. (2003), vol. 489, pp. 155–184.
4. J. P. Mmbaga, R. E. Hayes, F. H. Bertrand and P. A. Tanguy, Flow simulation in the nip of a rigid forward roll coater, Int. J. Numer. Meth. Fluids 2005; 48:1041–1066
5. P. H. Gaskell, G. E. Inne and M. D. Savage, An experimental investigation of meniscus roll coating, J. Fluid Mech. vol. 355 (1998), pp. 17-44.

!!Part 2: Evaporation and colloid precipitation
The wet thin film coating (see above) consists of an organic solvent and colloidal solid particles (10 – 100 nm; 1 – 5 vol%). The wet thin film coating is subjected to a gas flow parallel to the glass plate resulting in evaporation of the solvent and formation of a solid thin film (wet gel). During evaporation of the solvent a moving front between the wet and dry part is seen. We experienced that the solid coating properties may vary depending on colloid properties and drying conditions. Sometimes the dry coating is uneven because of an uneven wet film thickness, which could not level fast enough.
''Question 3:'' Is it possible to give a physical/mathematical description of the solid coating (relative) density as a function of particle size (distribution) and operating parameters (initial solid concentration, solvent evaporation rate, initial liquid film thickness)?
''Question 4:'' What initial uneven wet film will result in a visible uneven dry film after solvent evaporation at different drying conditions and initial film conditions (see part 1)?
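One standard way to start on Question 3 (following the drying-regime picture of refs. 6-7; the numbers below are example values, not measurements): compare solvent evaporation with particle diffusion via the film Peclet number Pe = E·H/D₀, with D₀ from Stokes–Einstein. Pe ≫ 1 suggests particles accumulate at the drying surface (crust/skin), Pe ≪ 1 a uniform film.

```javascript
// Film Peclet number sketch: Pe = evaporationRate * filmThickness / D0,
// with D0 = kB*T / (6*pi*eta*a) (Stokes-Einstein). Example values only.
const kB = 1.380649e-23; // Boltzmann constant, J/K
function stokesEinsteinD(tempK, viscosityPaS, particleRadiusM) {
  return kB * tempK / (6 * Math.PI * viscosityPaS * particleRadiusM);
}
function peclet(evapRateMps, filmThicknessM, diffusivityM2ps) {
  return evapRateMps * filmThicknessM / diffusivityM2ps;
}

// 50 nm radius colloid in a 1 mPa.s solvent at 300 K:
const D0 = stokesEinsteinD(300, 1e-3, 50e-9); // ~4e-12 m^2/s
```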
''Additional information:''
6. Christine M. Cardinal et al, Drying Regime Maps for Particulate Coatings, AIChE Journal, Vol. 56 (2010), 2769
7. Alexander F. Routh, William B. Zimmerman, Distribution of particles during solvent evaporation from films, Chemical Engineering Science 59 (2004) 2961 – 2968
8. Stergios G. Yiantsios, Brian G. Higgins, Marangoni flows during drying of colloidal films, Physics of Fluids 18 (2006) 082103
9. Robert W. Style and Stephen S. L. Peppin, Crust formation in drying colloidal suspensions, Proc. R. Soc. A January 8 (2011) 467:174-193;
Attached for your information are a few figures showing the kind of data a Percostation produces: at the top is the dielectric value (a function of moisture content, indicating when the ground is frozen), in the middle the electrical conductivity (indicating whether the ground is frozen and how the colloids have mobilised), and below are the temperatures of the various sensors plus the air temperature.


The idea is to work backwards from these to a predictive model of road thawing, so that, on the basis of a weather forecast, one could estimate how the ground thaws and dries and whether the weight restrictions can be lifted.
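A classical first building block for such a thaw-prediction model (an assumption on our part, not necessarily Roadscanners' approach) is the Stefan equation: the thaw front advances with the square root of the accumulated thawing index driven by the weather forecast, and the Percostation's dielectric and conductivity traces can then be used to check and recalibrate the prediction.

```javascript
// Stefan-equation sketch: thaw depth z = sqrt(2 * k * I / L), where
// k is the thermal conductivity of thawed soil [W/(m K)], I the thawing
// index [degC * s] and L the volumetric latent heat of soil ice [J/m^3].
function thawDepthStefan(kThawed, latentHeatVol, thawingIndexDegCs) {
  return Math.sqrt(2 * kThawed * thawingIndexDegCs / latentHeatVol);
}

// Example: 100 degC-days of thaw, k = 1.5 W/(m K), L = 1e8 J/m^3:
const depthM = thawDepthStefan(1.5, 1e8, 100 * 86400); // ~0.5 m
```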

Timo Saarenketo, M.Sc.
managing director
Roadscanners Oy
P.O.Box 2219, FIN-96201 Rovaniemi
p: +358 (0)16 4200 521
f: +358 (0)16 4200 511
m: +358 (0)50 5430 021
e-mail: timo.saarenketo@roadscanners.com
* 2010, Oct 18-21: Moscow (Russia). [[RSGI 1|RSGI 1]]
* 2011, Sep 19-23: Moscow (Russia). [[RSGI 2|RSGI 2]]
Posed by: Jan Teuber for Amfitech Aps.

This problem aims to investigate scanning of dielectric objects using stationary (electrostatic) fields.

Consider a standard plate capacitor with a voltage `V=Q/C` across the plates, where `C=\epsilon_0 A/d` is the capacitance of the system.
If now a dielectric object `E` is inserted between the plates,
the capacitance (or, equivalently, charge and/or potential) will change in a way characteristic of the dielectric and geometric properties of the object `E`. For instance, for a homogeneous sphere of radius `R` made from a material with dielectric constant `K`, inserted between plates so large that boundary fields can be ignored, a surface element containing a charge `Q` on one of the plates, at distance `r` from the sphere's centre, will experience a decrease in potential equal to
$$ \Delta \varphi = R^3\frac{K - 1}{K + 2} \frac{Q}{r^4} $$
If we correspondingly replace the pair of simple capacitor plates with a pair of plates each of which is actually subdivided into a number of smaller segments (e.g., in a checkerboard pattern),
such that separate voltages can be applied to each individual segment, a number of voltage constellations between the upper and lower plates is now possible. Each one of these will provide information about the object `E` (held stationary through all the voltage combinations).

The problem posed to the Study Group is to provide an estimate of the amount of geometric and dielectric information about `E` that may be obtained by this procedure.

The setup is to be considered as quite general, and various other geometries (of the capacitor part of the circuit) are also of interest.
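Evaluating the formula above numerically is straightforward; note that the (K−1)/(K+2) term (the Clausius–Mossotti factor) saturates at 1 for large K, so the measurement discriminates low-permittivity materials far better than high-permittivity ones.

```javascript
// Potential decrease from the text's formula, in the same (Gaussian-style)
// units: deltaPhi = R^3 * (K-1)/(K+2) * Q / r^4.
function clausiusMossotti(K) {
  return (K - 1) / (K + 2); // 0 for vacuum (K = 1), -> 1 as K -> infinity
}
function deltaPhi(R, K, Q, r) {
  return Math.pow(R, 3) * clausiusMossotti(K) * Q / Math.pow(r, 4);
}
```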
Air-conditioning scroll compressors are manufactured by the million today and are taking ever greater market share from traditional (reciprocating) single-piston compressors.

The scroll compressor consists of two plane spirals running inside each other. Normally both scrolls are the same circle involute with constant wall thickness. The compression chambers are therefore thin, oblong, and bent. Often one scroll is fixed and the other is orbiting.

The scroll compressor has its suction port at the periphery and discharge in the spiral centre. No suction valve is needed and a discharge valve is only present to decrease power consumption a little bit.

The compression chambers of a scroll compressor shrink at a rather constant rate, giving a harmless, almost constant torque load on both the electric motor and the housing (action and reaction).

Consequences of the scroll geometry are long gas leakage passages and high material temperatures in the scroll centre.

Different movements of scrolls are topics of today: Co-rotating and co-orbiting scrolls.

The task could be: choose one scroll geometry and the movements of both scrolls. Find the other scroll geometry (one scroll is the envelope of the other). Scroll wall thickness could be a variable too. Make investigations and see how compressor performance/efficiency is influenced. Finally, find sensitivities and optimise the design if possible.
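For reference, the standard scroll-wall centreline is a circle involute; candidate geometries can be generated with a small sketch like the following (in the usual constant-wall-thickness case, the paired scroll is the same involute rotated by half a turn):

```javascript
// Circle involute with base radius a:
//   x(t) = a * (cos t + t * sin t),  y(t) = a * (sin t - t * cos t)
// Its distance from the origin is a * sqrt(1 + t^2).
function involutePoint(a, t) {
  return [a * (Math.cos(t) + t * Math.sin(t)),
          a * (Math.sin(t) - t * Math.cos(t))];
}
function involutePath(a, turns, stepsPerTurn) {
  const n = Math.round(turns * stepsPerTurn), pts = [];
  for (let i = 0; i <= n; i++)
    pts.push(involutePoint(a, 2 * Math.PI * turns * i / n));
  return pts;
}
```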
Vehicle Research Corporation


<p><font face="arial,sans-serif,times">The science of aeronautics has for the past half-century
proceeded under the assumption that shock waves and the sonic boom are
laws of nature.  The limitations on the economy and environmental acceptance
of supersonic aircraft imposed by these characteristics have restricted civil air
transport to subsonic speeds, which are too slow for global travel.</font></p>

<p><font face="arial,sans-serif,times">The basis for the cited assumption is the further assertion,
universally presented in the literature, that an airplane normally flies at
a constant speed in a constant-energy atmosphere, which has led to discarding
the right-hand term of Crocco's equation, leaving a special case where entropy
production by shock waves in the first term is a necessary condition.</font></p>
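For reference, the steady-flow form of Crocco's theorem under discussion is, in standard notation (with h₀ the stagnation enthalpy, s the entropy and ω the vorticity):

$$ \mathbf{v} \times \boldsymbol{\omega} = \nabla h_0 - T\,\nabla s $$

Discarding the right-hand ∇h₀ term corresponds to the constant-energy-atmosphere assumption described above.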

<p><font face="arial,sans-serif,times">Vehicle Research Corporation (VRC) has explored the possibility
of developing shock-free supersonic flight with no sonic boom, and has
formulated a new mechanism based on restoring the right hand term of
Crocco's equation, enabling elimination of the dissipative first term and
its shock waves.  This VRC technology employs a nozzle shaped wing-underside
to generate an array of weak compression waves from its forward section.
An underwing planar jet of engine compressor air is added which intercepts
and reflects the compression waves back to the upwards reflexed aft wing
underside to provide an increased pressure having lift and thrust components,
thereby recovering the compression energy into useful work.  This vertical
energy gradient provided by the underwing jet apparently is a necessary
condition for shock-free flight.  The sufficiency condition presumably
is angular momentum conservation, which is satisfied by the vortex sheet
generated on the upper surface of the underwing jet.</font></p>

<p><font face="arial,sans-serif,times">This formulation is constructed of four modelling
elements that could benefit from a critical review to validate their
mechanisms and quantify their performance projections, as follows:</font></p>

<ol type="1"><font face="arial,sans-serif,times"><li><p>Conservation of angular momentum is used to project a reaction mechanism for</p>
        <ul><li>subsonic flight - the well-known starting vortex/upstream bound vortex system,</li><li>supersonic flight - two downstream wake reaction mechanisms:</li><li>a shock wave system generating a stack of vortex sheets (WP493), or</li><li>a shock-free system using an underwing jet to generate a single sheet.</li></ul>
        <p><strong>Discussion and Questions:</strong> We describe the well-known
        subsonic upstream bound vortex mechanism simply to provide an example
        and lay a foundation for our claim that the same circulation theorem
        used at subsonic speeds applies to supersonic flight as well, but with
        the limitation that the supersonic reaction mechanism must take place
        in the downstream Mach cone because of the speed of sound restrictions
        on pressure transmission.  We show that the shock and expansion waves
        intersect in the downstream wake to generate a stack of vortex sheets
        above the wing rotating in the same direction as the wing circulation,
        and a stack of vortex sheets below the wing that rotate in the correct
        opposite direction to comprise the necessary angular momentum reaction
        to both the wing circulation and the adverse rotation above the wing.
        This difference reaction mechanism partly explains the high drag of the
        shock wave system.  These calculations explain why we have shock waves,
        and also show that there is a second mechanism that can provide the
        necessary reaction by using an underwing jet to generate a single strong
        vortex sheet to provide the angular momentum reaction and thereby avoid
        shock waves.  The wing upper surface should also be flat and mounted at
        zero angle of attack to avoid generating an adverse upper surface
        reaction.  Is this logic correct and are the two reaction mechanisms
        valid?  Note that Crocco's equation shows that there are two mechanisms,
        and only two.  Is the new underwing jet reaction mechanism valid?  If
        it is, how can we quantify and strengthen the model?  If it is not
        correct, how do we modify it without losing the idea?</p>
<li><p>Underwing jet mixing mechanism to grow the single
        reaction sheet, employing controlled upstream acoustic excitation.</p>
        <p><strong>Discussion and Questions:</strong> High pressure air,
        taken from the engine compressor, is used to provide the high
        velocity underwing planar jet, discharged from the underwing
        manifold mounted perhaps a foot below the wing undersurface.  The
        interface between this high velocity jet and the adjacent slower
        gap flow will generate a single strong vortex sheet to provide
        the required angular momentum reaction to the lifting circulation.
        Further, considerable experimental evidence is at hand demonstrating
        that this vortex sheet can be grown in thickness in the downstream
        direction, even at supersonic speeds, by controlled use of upstream
        acoustic excitation to provide resonance.  This vortex sheet will
        also intercept the wing generated compression waves and reflect
        them back to the upwards reflexed aft wing undersurface, recovering
        their energy into useful work.</p>
<li><p>Compression wave reflection from the underwing
        jet/vortex sheet: the sheet intercepts the wing-generated waves
        and reflects them back to the upwards reflexed aft wing
        undersurface, converting the wave energy into pressure with
        lift and thrust components and thereby recovering it as useful work.
        The mechanism at the same time acts as a shield, preventing
        the compression waves from being transmitted downward,
        dissipated into heat, and causing the sonic boom.</p>
<p><strong>Discussion and Questions:</strong> The fundamental question
        is: How much of the compression wave energy is reflected by
        the underwing jet/vortex sheet and recovered, and how much
        is transmitted and lost?  This question has two levels: is
        there a fundamental limit to the energy reflection and recovery,
        and if not how do we maximise this recovery?  VRC has made a crude
        preliminary calculation of these quantities (James, WP 315).
        Lockheed, working with us, and attempting to buy our program,
        made a crude CFD calculation of this reflection.  And we conducted
        a preliminary wind tunnel test at Ohio State University (OSU) under
        a DARPA contract, measuring the reflection.  But we are seeking a
        better formulation.  Limitations on reflection due to available
        energy are also required.  The problem of course is compressible,
        non-linear, 2-D fluid mechanics.</p>
<li><p>Sonic boom generation.  The underwing jet system will replace the shock
        wave system in provision of the required vortex sheet reaction.
        The compression wave energy will be recovered by the reflection
        mechanism.  The underwing jet/vortex sheet array will continue
        aft of the wing trailing edge as an array of vortices, comprising
        a vortex flap, similar to a jet flap, inclined downwards at first
        while in the Mach cone of the wing undersurface, and thereafter
        curved back to the horizontal by the outer flow.  But at supersonic
speeds any forces or pressures generated by this downstream vortex
        flap cannot affect the upstream wing.  Hence the wave energy recovery
        and performance of the wing will not be changed by anything that may
        occur downstream.  Further, the vortex flap, due to its extensive
        mixing, will have a reduced velocity, probably subsonic with respect
        to the outer flow, and hence will not generate any compression waves.
        However, the form of the vortex flap, having a downward inclination,
        and travelling with the same speed as the wing, might be expected to
        generate a further array of compression waves.  But this cannot happen,
        because angular momentum has already been satisfied, and no further
        uncompensated turning can occur.  The answer to this dilemma probably
        is that the external irrotational field of the vortices in the vortex
        flap generate an aft perturbation flow which expands and cancels the
        otherwise coalescing compression waves, thereby preventing any
        transmission of shock waves to the ground to cause a sonic boom.</p>
<p><strong>Discussion and Questions:</strong> Is this explanation of the sonic
        boom question correct?  If not, can it be repaired?  If it is correct,
        a quantitative formulation would be highly desirable.</p>
&mdash; the problem statements from the past Mathematical Study Groups
Mathematical Study Group problems
In firing Ink Jet printheads, we have observed from time to time that it is possible to create a drop much smaller than the nozzle. In normal operation of Xaar Printheads, the drop ejected is first extruded through the nozzle by a positive pressure. This results in a forward-moving plug having a certain momentum and of the diameter of the nozzle. At the end of a time period (typically between 2 and 10usec, according to detail design), the pressure is reversed and a neck begins to form at the nozzle. The momentum of the plug carries it forward, extending the neck to eventually become a long ligature, which finally breaks off at the nozzle. Break-off occurs between 10 and 100usec after the pressure reversal, and this time is easily predicted with a slight modification to the Rayleigh break-off criterion.
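For the break-off timescale above, the Rayleigh capillary time sqrt(ρ r³ / γ) gives the right order of magnitude. A minimal sketch, using generic water-like fluid properties as assumptions (not Xaar ink data):

```python
import math

# Order-of-magnitude break-off time from the Rayleigh capillary timescale
# sqrt(rho * r^3 / gamma). Density and surface tension below are generic
# water-like values, not Xaar ink data.
def capillary_time(radius_m, density=1000.0, surface_tension=0.03):
    """Rayleigh capillary timescale for a ligament of the given radius (SI units)."""
    return math.sqrt(density * radius_m ** 3 / surface_tension)

t = capillary_time(25e-6)   # 50 um nozzle -> 25 um radius: ~23 us
```

The roughly 23 usec result sits inside the observed 10-100 usec break-off window, consistent with the slightly modified Rayleigh criterion mentioned above.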

However, under certain circumstances, not really known in the sense of my being able to repeat them, we have seen a quite different behaviour in which a much smaller drop - e.g. 1/4 the diameter of the nozzle - emerges rather suddenly from the centre of the nozzle and flies off at typically 2x the normal firing velocity. We believe that this is associated with very short pressure pulses - of the order of 1usec - but usually it happens as a result of reflected acoustic waves and not as a result of controlling the applied pressure - hence the difficulty of reproducing the circumstances.

A paper, "Micromachined Acoustic-Wave Liquid Ejector" (Journal of Microelectromechanical Systems, Vol. 10, No. 3, Sept. 2001), describes a focussed acoustic wave technique which gives rise to a droplet in a similar fashion from the centre of a free surface. I believe this may be relevant, though there is no actuator element in our system corresponding to the one described.

Our actuator consists of a piezo-electric channel of a defined length which squeezes the ink and creates a nominally plane acoustic wave which impinges on the nozzle. Any focussing which takes place must be due entirely to the shape of the nozzle.

I believe that radial acoustic waves are dispersive, and it might be that we accidentally create a soliton, which propagates radially within the nozzle causing the energetic ejection. Normally such waves would disperse, hence my suggestion that a soliton may be involved.

Really, this exhausts my current knowledge on the subject (and I do not understand solitons!). I can describe our current actuator in more detail, but I feel that it is probably better at this stage to keep the minds open and allow an element of imagination to take charge!
Being recorded at the skin, a surface electromyographic signal (sEMG) reflects the electrical activity of an underlying muscle (or group of muscles). Hence, sEMG offers a fairly simple, non-invasive way to assess the activation of superficial muscles. The signal is an integral measure summing action potentials of many motor units, i.e. groups of muscle fibers that are innervated by a single nerve fiber. The biophysics involved in the production of the motor unit action potential can be considered well understood, and there is wide accord regarding the basic ingredients of a corresponding model.

Mathematical models are typically designed to support the application of sEMG in fields like ergonomics, biomechanics, and kinesiology. They primarily serve for identifying which muscles are involved in certain performances, determining the strength and timing of muscle activity, or monitoring the muscle's physiology during different activities. In this sense models are used to relate physiological and anatomical parameters to global variables like mean intensities and power spectral distributions, that is, coarse-grained variables of sEMG that are assumed to reflect a muscle's overall state. Recently developed, rather sophisticated techniques, however, allow for much more fine-grained experimental approaches: high-density sEMG provides detailed spatial resolution by using large arrays of electrodes to record muscle activity, and thus gives the opportunity to analyze the development of the spatially extended activity of the muscle. The corresponding spatiotemporal patterns display both traveling solitary waves along the direction of muscle fibers and diffusive spreading of electric activity, due to cross-talk between neighboring muscle fibers and/or volume conduction (passive conduction through the surrounding tissue).

The question arises to what extent traveling waves and diffusion patterns can be pinpointed given fundamental physiological and anatomical properties. If such a relation can be deduced, then one can continue asking how measurement conditions and models should be adapted to maximize the information content of the parameters in these models.
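A minimal caricature of the two transport mechanisms just described (a wave traveling along the fiber direction plus diffusive spreading) is a 1-D advection-diffusion equation for the surface potential. The sketch below uses illustrative parameter values only, not fitted physiology:

```python
import numpy as np

# Toy 1-D picture of the two transport mechanisms in high-density sEMG:
# a potential travelling along the fiber direction at conduction velocity c,
# plus diffusive spreading (volume conduction / cross-talk), coefficient D.
# All parameter values are illustrative, not fitted to physiology.
c, D = 4.0, 1e-3                  # m/s, m^2/s
L, N = 0.2, 400                   # 20 cm of fiber, number of grid points
dx = L / N
dt = 0.4 * dx / c                 # CFL-limited explicit time step
x = np.arange(N) * dx
u = np.exp(-((x - 0.05) / 0.005) ** 2)   # localized motor-unit potential

for _ in range(200):              # advance 10 ms
    adv = -c * (u - np.roll(u, 1)) / dx                          # upwind advection
    dif = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # diffusion
    u = u + dt * (adv + dif)

peak = x[np.argmax(u)]            # crest has advected ~c * 10 ms = 4 cm downstream
```

Fitting the wave speed and spreading rate of such a model to high-density recordings is one concrete way to pinpoint the physiological parameters asked about above.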
Also available as a [[pdf-file|p/esgi/52/muscles.pdf]] (with pictures and references). 
The number of automotive manufacturers making use of speech recognition systems within their products is steadily increasing, along with the number of applications. The interfaces to many of the ancillary controls could be based around speech recognition, assuming the speech engines were accurate enough to produce acceptable levels of reliability. Climate control, telephone, navigation system, radio and PDA (personal digital assistant) could, and will be, voice controlled. However, in many vehicles, the background noise levels associated with vehicle operation significantly reduce the reliability of the systems, producing unacceptable levels of incorrect identification.

In order to test speech engines, both for algorithm development and product selection, it is important to be able to test the systems with representative levels of background noise. This noise must represent a wide variety of sources, from engine and wind noise to rain drumming and passing trucks, under a similarly diverse range of operating conditions. As such, the process of testing a system rigorously under actual measured conditions becomes extremely onerous.

The purpose of this project is to identify a suitably compact, artificial signal that is short enough to allow extensive testing of the speech engines but remains representative of the in-vehicle operating environment encountered. To achieve this the signal must contain all the significant characteristics of the vehicle environment, including level, transience and spectral balance such that, statistically, it will represent the actual noise floor presented to the speech recognition algorithm over the vehicle's operating envelope.

The question is how to identify the significant features in the background noise and how they should be incorporated into the artificial signal to ensure a statistically valid test signal is achieved.
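One standard starting point is to shape white noise with a target average spectrum via the FFT. In the sketch below the target curve is an assumed low-frequency roll-off standing in for measured vehicle data; capturing transients (passing trucks, rain drumming) would need time-varying shaping on top of this:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 16000, 1 << 15                      # sample rate (Hz) and signal length
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# Target magnitude spectrum: a simple low-frequency-weighted roll-off standing
# in for the measured average in-vehicle noise spectrum (an assumption here;
# in practice this curve comes from recorded data).
target = 1.0 / np.sqrt(1.0 + freqs / 100.0)

white = rng.standard_normal(n)
shaped = np.fft.irfft(np.fft.rfft(white) * target, n)
shaped /= np.max(np.abs(shaped))            # normalise to unit peak level
```

The statistical validity question then becomes how much of the measured noise ensemble (spectra, level distributions, transient statistics) such a compact synthetic signal must reproduce.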
with NCBES.

Some biomedical applications require cells to be grown on a substrate and then detached into suspension. One choice of material for the substrate is a thermosensitive polymer. At physiological temperatures the polymer is dense and hydrophobic, allowing easy cell attachment. Below this temperature the polymer undergoes a phase transition to a hydrophilic state, absorbing water, dissolving, and promoting cell detachment and suspension.

Two methods of substrate preparation are used: casting and spin-coating. In casting, polymers are precipitated onto a surface by evaporation of their solvent (ethanol). In spin-coating, a layer of polymer and solvent is spin-coated onto the surface and dried. Spin-coating the polymer makes it a good substrate for cell growth, rivalling standard substrates (which cannot easily release the cells), while cast substrates are an order of magnitude worse. Spin-coating has the additional benefit that it allows much thinner substrates to be created. The thickness limit for casting is about 1 micron while for spin-coating it is about 10 nanometres.
* How do the properties of spin-coated substrates differ from cast substrates?
* How might those differences account for the different biological properties of the two substrates?

It is undesirable for the polymer to dissolve during the cell detachment phase, since it is considered a contaminant in cell suspensions. To prevent this a crosslinker can be added to the polymer and activated by curing under UV light. This has no effect on the biological properties of a spin-coated polymer layer less than 20nm thick but the biological properties of thicker spin-coated layers become comparable with cast polymers.
* Why does crosslinking reduce the bioactivity of the substrate?
* Why is this effect not seen in very thin substrates?
The Bourdon bell and the skewed tower of the old church in Delft

The first parish church of Delft, the old church, was built around 1200. In front of the church a 75-meter-high tower, with brickwork spire and four turrets, was built in 1350. Even during its construction, the tower was plagued by subsidence. This could be because the water in the Oude Delft had to be redirected to make way for the existing church. The tower therefore was probably built on a filled-in canal. Throughout the ages, the leaning tower has been the cause of considerable alarm to many an inhabitant. The tower leans 1.20 meters to the west and 1 meter to the north.

Two unique bells hang from a heavy oak bell cage in the fourth loft in the tower of the Oude Kerk (Old Church). These are the Trinitas bell dating from 1570 and the Laudate bell dating from 1719. The Trinitas bell, or Bourdon bell, is the most exceptional of the two, weighing almost nine tonnes (!). The Bourdon can still be heard each day, although somewhat modestly, when a hammer chimes the hour and half-hour. The Bourdon is only rung on very special occasions such as the funeral of a member of the Dutch royal family. The powerful chime of the Bourdon causes such heavy vibrations that regular use could damage the monument.

In this problem for the study group, we try to model the effect of ringing the extremely heavy bells inside the leaning tower with mathematical methods. Is it possible to analyze the effect of ringing on the stability of the tower and on the occurrence of damage in the tower? Is there resonance?
Modeling a skew beam with a heavy pendulum can be mathematically interesting, because of the possibility of chaotic behavior. But is this model too simple? Should one include damping? How does one model ancient stone constructions?

To be able to work on this problem, it is necessary to collect detailed information about the old church (height, width, weight of bells, position of bells, other relevant issues). It is known, for example, from scaled drawings of the construction, that the backside of the tower is heavier than the leaning fore side (more stones are used in the backside part). Also the heaviest bell is placed on the backside.
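As a first cut, the tower's fundamental mode can be treated as a damped linear oscillator driven by the periodic bell force; the closed-form steady-state amplitude already shows how sharply the response peaks when the ringing frequency approaches the tower's natural frequency. All parameter values below are placeholders, not measured Oude Kerk data:

```python
import math

# Tower fundamental mode treated as a damped linear oscillator driven by the
# periodic bell force: x'' + 2*zeta*w0*x' + w0^2 * x = f0 * sin(W*t).
# Natural frequency, damping ratio, and force amplitude are placeholder values,
# not measured Oude Kerk data.
def steady_amplitude(W, w0=2 * math.pi * 0.7, zeta=0.02, f0=1.0):
    """Closed-form steady-state amplitude of the driven, damped oscillator."""
    return f0 / math.sqrt((w0**2 - W**2) ** 2 + (2 * zeta * w0 * W) ** 2)

w0 = 2 * math.pi * 0.7
gain = steady_amplitude(w0) / steady_amplitude(0.5 * w0)   # resonant amplification
```

With the real bell swing frequency, tower mode, and damping filled in from the collected data, this kind of estimate indicates whether resonance is a genuine risk before any nonlinear (pendulum or masonry) modelling is attempted.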

The body of the tower has four floors and a thirty-meter-high spire, which again contains four floors. The lowest tower room can be reached via a short porch, built after 1500; the now closed passages left and right to the aisles are still recognizable. It is obvious that these were made when the tower was already leaning over.
In the northern aisle is a spiral staircase. The first floor accommodates the large wrought-iron clockwork (1605); it is out of order now. The much smaller clockwork that took over in 1885 is also out of order and is now placed inside the old one. The present clockwork is placed at its appropriate place near the clockfaces. In the northwestern corner of the tower, at second floor level, is a charter chamber with an old iron door. It is said that here Balthasar Gerards, the assassin of Prince William of Orange, was locked away.  The third floor only heightens the tower. After all, it was important to be able to place the bells as high as possible. The fourth floor is the 'klokkenzolder' (bell attic). Here stands a robust oak frame from the 16th or 17th century, in which the bells rest.

!More information

Photo of the skewed tower [img[p/esgi/48/img/Pict017.jpg]]
A close-up of the skewed tower [img[p/esgi/48/img/Pict022.jpg]]
This image shows some (repaired) cracks in the wall [img[p/esgi/48/img/Pict020.jpg]]
Architectural drawings of the old church:
with Top Tier Irish Bank

The evolution of structured products in Ireland has become increasingly complex in recent years. This is due to the volatile nature of the markets and the changing requirements of today's investors.

For the purpose of valuation, structured products are generally replicated with simpler instruments. It is not possible to break all products down into simple components. In cases where the structured product has to be depicted as a combination of instruments which are themselves complex in nature and thus difficult to value and to hedge on the capital market, mathematical and statistical procedures must be developed in order to value the new innovative products and assess the risks involved.

The Study Group's aims are to:
# Examine the bank's offering of new innovative structured products which have been developed in a low interest rate environment and are designed to meet distinct investor profiles.
# Inspect the products' susceptibility to volatile markets.

Huw Williams, Jaguar Research

Sunroof boom is a low frequency problem (60-80 Hz), which occurs when a car is being driven with its sliding sunroof open. The problem is due to the resonance of the air in the passenger compartment being excited by vortices which are shed from the front edge of the aperture. The occurrence depends on the forward velocity of the car, the length of the aperture and the longitudinal position of the aperture relative to the top of the windscreen. A pop-up deflector helps, particularly when it is castellated, but it is not a complete solution.

When we are designing new models, we would like to be able to decide the optimum length and position of the sunroof, keeping the boom at acceptable levels and making the speed of onset as high as possible, without over restricting the length of the sliding roof. We need to achieve this long before we build a sufficiently representative car that we can track test or wind tunnel test. 
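A first estimate of the excitation frequency follows from Strouhal scaling of the shear-layer shedding over the open aperture. The Strouhal number below is a typical first-mode value and is an assumption, not Jaguar data:

```python
# Strouhal scaling for the frequency of shear-layer vortex shedding over the
# open sunroof aperture. St ~ 0.5 is a typical first-mode value and is an
# assumption here; boom occurs when this frequency coincides with the cabin
# acoustic resonance (reported at 60-80 Hz).
def shedding_frequency(speed_m_s, aperture_length_m, strouhal=0.5):
    return strouhal * speed_m_s / aperture_length_m

f = shedding_frequency(30.0, 0.25)   # ~110 km/h over a 0.25 m aperture -> 60 Hz
```

Sweeping speed against aperture length and position with a relation of this kind gives a first map of where lock-on with the cabin mode can occur, long before a representative car exists for track or wind tunnel testing.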
<<timeline better:true maxDays:3>>
|''Version:''|1.0.1 (2006-06-01)|
|''Description:''|Provides a drop down listing current tiddler tags, and allowing toggling of tags.|
|''Source Code:''|[[TaggerPluginSource]]|
|''~TiddlyWiki:''|Version 2.0.8 or better|
// /%
config.tagger={defaults:{label:"Tags: ",tooltip:"Manage tiddler tags",taglist:"true",excludeTags:"",notags:"tiddler has no tags",aretags:"current tiddler tags:",toggletext:"add tags:"}};config.macros.tagger={};config.macros.tagger.arrow=(document.all?"▼":"▾");config.macros.tagger.handler=function(_1,_2,_3,_4,_5,_6){var _7=config.tagger.defaults;var _8=_5.parseParams("tagman",null,true);var _9=((_8[0].label)&&(_8[0].label[0])!=".")?_8[0].label[0]+this.arrow:_7.label+this.arrow;var _a=((_8[0].tooltip)&&(_8[0].tooltip[0])!=".")?_8[0].tooltip[0]:_7.tooltip;var _b=((_8[0].taglist)&&(_8[0].taglist[0])!=".")?_8[0].taglist[0]:_7.taglist;var _c=((_8[0].exclude)&&(_8[0].exclude[0])!=".")?(_8[0].exclude[0]).readBracketedList():_7.excludeTags.readBracketedList();if((_8[0].source)&&(_8[0].source[0])!="."){var _d=_8[0].source[0];}if(_d&&!store.getTiddler(_d)){return false;}var _e=function(e){if(!e){var e=window.event;}var _11=Popup.create(this);var _12=store.getTags();var _13=new Array();for(var i=0;i<_12.length;i++){_13.push(_12[i][0]);}if(_d){var _15=store.getTiddler(_d);_13=_15.tags.sort();}var _16=_6.tags.sort();var _17=function(_18,_19,_1a){var sp=createTiddlyElement(createTiddlyElement(_11,"li"),"span",null,"tagger");var _1c=createTiddlyButton(sp,_18,_1a+" '"+_19+"'",taggerOnToggle,"button","toggleButton");_1c.setAttribute("tiddler",_6.title);_1c.setAttribute("tag",_19);insertSpacer(sp);if(window.createTagButton_orig_mptw){createTagButton_orig_mptw(sp,_19)}else{createTagButton(sp,_19);}};createTiddlyElement(_11,"li",null,"listTitle",(_6.tags.length==0?_7.notags:_7.aretags));for(var t=0;t<_16.length;t++){_17("[x]",_16[t],"remove tag ");}createTiddlyElement(createTiddlyElement(_11,"li"),"hr");if(_b!="false"){createTiddlyElement(_11,"li",null,"listTitle",_7.toggletext);for(var i=0;i<_13.length;i++){if(!_6.tags.contains(_13[i])&&!_c.contains(_13[i])){_17("[ ]",_13[i],"add tag ");}}createTiddlyElement(createTiddlyElement(_11,"li"),"hr");}var 
_1f=createTiddlyButton(createTiddlyElement(_11,"li"),("Create new tag"),null,taggerOnToggle);_1f.setAttribute("tiddler",_6.title);if(_d){_1f.setAttribute("source",_d);}Popup.show(_11,false);e.cancelBubble=true;if(e.stopPropagation){e.stopPropagation();}return (false);};createTiddlyButton(_1,_9,_a,_e,"button","taggerDrpBtn");};window.taggerOnToggle=function(e){var tag=this.getAttribute("tag");var _22=this.getAttribute("tiddler");var _23=store.getTiddler(_22);if(!tag){var _24=prompt("Enter new tag:","");if(_24!=""&&_24!=null){var tag=_24;if(this.getAttribute("source")){var _26=store.getTiddler(this.getAttribute("source"));_26.tags.pushUnique(_24);}}else{return false;}}if(!_23||!_23.tags){store.saveTiddler(_22,_22,"",config.options.txtUserName,new Date(),tag);}else{if(_23.tags.find(tag)==null){_23.tags.push(tag);}else{if(!_24){_23.tags.splice(_23.tags.find(tag),1);}}store.saveTiddler(_23.title,_23.title,_23.text,_23.modifier,_23.modified,_23.tags);}story.refreshTiddler(_22,null,true);if(config.options.chkAutoSave){saveChanges();}return false;};setStylesheet(".tagger a.button {font-weight: bold;display:inline; padding:0px;}\n"+".tagger #toggleButton {padding-left:2px; padding-right:2px; margin-right:1px; font-size:110%;}\n"+"#nestedtagger {background:#2E5ADF; border: 1px solid #0331BF;}\n"+".popup .listTitle {color:#000;}\n"+"","TaggerStyles");window.lewcidTiddlerSwapTag=function(_27,_28,_29){for(var i=0;i<_27.tags.length;i++){if(_27.tags[i]==_28){_27.tags[i]=_29;return true;}}return false;};window.lewcidRenameTag=function(e){var tag=this.getAttribute("tag");var _2d=prompt("Rename tag '"+tag+"' to:",tag);if((_2d==tag)||(_2d==null)){return false;}if(store.tiddlerExists(_2d)){if(confirm(config.messages.overwriteWarning.format([_2d.toString()]))){story.closeTiddler(_2d,false,false);}else{return null;}}tagged=store.getTaggedTiddlers(tag);if(tagged.length!=0){for(var 
j=0;j<tagged.length;j++){lewcidTiddlerSwapTag(tagged[j],tag,_2d);}}if(store.tiddlerExists(tag)){store.saveTiddler(tag,_2d);}if(document.getElementById("tiddler"+tag)){var _2f=document.getElementById(story.idPrefix+tag);var _30=story.positionTiddler(_2f);var _31=document.getElementById(story.container);story.closeTiddler(tag,false,false);story.createTiddler(_31,_30,_2d,null);story.saveTiddler(_2d);}if(config.options.chkAutoSave){saveChanges();}return false;};window.onClickTag=function(e){if(!e){var e=window.event;}var _34=resolveTarget(e);var _35=(!isNested(_34));if((Popup.stack.length>1)&&(_35==true)){Popup.removeFrom(1);}else{if(Popup.stack.length>0&&_35==false){Popup.removeFrom(0);}}var _36=(_35==false)?"popup":"nestedtagger";var _37=createTiddlyElement(document.body,"ol",_36,"popup",null);Popup.stack.push({root:this,popup:_37});var tag=this.getAttribute("tag");var _39=this.getAttribute("tiddler");if(_37&&tag){var _3a=store.getTaggedTiddlers(tag);var _3b=[];var li,r;for(r=0;r<_3a.length;r++){if(_3a[r].title!=_39){_3b.push(_3a[r].title);}}var _3d=config.views.wikified.tag;if(_3b.length>0){var _3e=createTiddlyButton(createTiddlyElement(_37,"li"),_3d.openAllText.format([tag]),_3d.openAllTooltip,onClickTagOpenAll);_3e.setAttribute("tag",tag);createTiddlyElement(createTiddlyElement(_37,"li"),"hr");for(r=0;r<_3b.length;r++){createTiddlyLink(createTiddlyElement(_37,"li"),_3b[r],true);}}else{createTiddlyText(createTiddlyElement(_37,"li",null,"disabled"),_3d.popupNone.format([tag]));}createTiddlyElement(createTiddlyElement(_37,"li"),"hr");var h=createTiddlyLink(createTiddlyElement(_37,"li"),tag,false);createTiddlyText(h,_3d.openTag.format([tag]));createTiddlyElement(createTiddlyElement(_37,"li"),"hr");var _40=createTiddlyButton(createTiddlyElement(_37,"li"),("Rename tag '"+tag+"'"),null,lewcidRenameTag);_40.setAttribute("tag",tag);}Popup.show(_37,false);e.cancelBubble=true;if(e.stopPropagation){e.stopPropagation();}return 
(false);};if(!window.isNested){window.isNested=function(e){while(e!=null){var _42=document.getElementById("contentWrapper");if(_42==e){return true;}e=e.parentNode;}return false;};}config.shadowTiddlers.TaggerPluginDocumentation="The documentation is available [[here.|http://tw.lewcid.org/#TaggerPluginDocumentation]]";config.shadowTiddlers.TaggerPluginSource="The uncompressed source code is available [[here.|http://tw.lewcid.org/#TaggerPluginSource]]";
// %/
''If you want this documentation available offline, copy this tiddler to your TW.''

The tagger plugin is the result of combining key features from the dropTags and tagAdder macros. However, since it departs somewhat from the interface tagAdder users will be familiar with, I'm making it available as a new plugin alongside tagAdder.

Tagger provides a dropdown list of the current tiddler's tags, along with the ability to toggle them. Furthermore, it can optionally display a list of tags in the dropdown which can be added to the tiddler.

*Clicking on ''[x]'' and ''[ ]'' removes and adds the tag respectively.
*Clicking on the tag text displays the tag dropdown for that tag, listing tiddlers tagged with it.
*The ''Create new tag'' option lets you quickly type in a new tag not in the list.
*Click on this button to see the dropdown: <<tagger>>

Further note that each tag dropdown has a ''Rename tag'' option. This can be used to quickly rename a tag across the entire TW, and also rename its tiddler if it exists.

//''tagAdder, dropTags and the future''
- tagAdder will no longer be developed, but will remain available. I encourage all tagAdder users to upgrade to tagger.
- dropTags will still be developed for those users who don't want the 'tag editing' features.//

!Examples & Usage:
*At its simplest, using tagger is as easy as {{{<<tagger>>}}} <<tagger>>
**This gives a dropdown with the current tiddler tags, followed by all the tags in the TW.
*You can also use a list of specified tags instead of all tags in the TW, by specifying a source tiddler.
**{{{<<tagger source:TagDataBase>>}}} <<tagger source:TagDataBase>>
*You can also display ONLY the current tiddler tags
**{{{<<tagger taglist:false>>}}} <<tagger taglist:false>>

*To exclude tags from the list: {{{<<tagger exclude:"excludeLists Tag2 [[Tag with spaces]]">>}}} <<tagger exclude:"excludeLists Tag2 [[Tag with spaces]]">>

*For a custom button label: {{{<<tagger label:"custom label">>}}} <<tagger label:"custom label">>
*For a custom tooltip: {{{<<tagger tooltip:"custom tooltip">>}}} <<tagger tooltip:"custom tooltip">>

!CSS and Styling:
For those wishing to customize the popup appearance:
*the main popup has a class and id of {{{popup}}}, as with all other popups.
*the nested tag popups have an id of {{{nestedtagger}}}

!Advanced Users:
You can change the global defaults for tagger, such as the button label, the tags to exclude, or whether to display the taglist, by editing the ''config.tagger.defaults'' section in the code.

!To Do:
*code optimization
*possibly a 'delete this tag' option.

*version 1.0.1 (2006-06-01): fixed conflicts with QuickOpenTag (TagglyTagging) and AutoTagger.
The distribution of heat and moisture in sugar stored in silos influences:
* The chemical and microbial stability of the sugar.
* The ease of silo emptying due to moist sugar, sugar caking and hardening.
* The risk of dust explosions.
The above are important factors in sugar quality and production. In order to develop better silo operation strategies and silo designs, Danisco Sugar would like to develop a model that can predict, with some accuracy, the temperature and moisture gradients in bulk sugar.
!!The silos
Danisco Sugar utilises several types of silos for the storage of crystalline sugar. They differ in:
* The use of construction material (e.g. concrete or plate silos).
* The construction (e.g. volume, height, diameter, with or without a central tower, insulation etc.).
* The method of filling and emptying (e.g. first in - last out or first in - first out).
* Sugar conditioning functionality (e.g. temperature control of walls, control of the condition of the air space above sugar, ability to blow conditioned air through the bulk sugar etc.). 

The silos are filled during the beet campaign (Sept.-Dec.); the sugar is withdrawn on a regular, but not constant, basis, and the silos are emptied before the next year's campaign.
!!Temperature and moisture migration
Moisture gradients are created due to moisture migration in the bulk sugar by a variety of modes:
* Diffusion due to temperature gradients.
* Diffusion due to bulk sugar quality heterogeneity.
* Natural convection due to temperature gradients.
* Forced convection due to bulk sugar conditioning (blow-through).
* Introduction and withdrawal of moisture through silo walls (e.g. concrete silos) and due to the conditioning of the air above the bulk sugar. 

Temperature gradients are induced by:
* The temperature of the silo walls, top and bottom due to seasonal variations in temperature.
* The temperature of the air space above the bulk sugar due to seasonal variations or conditioning.
* The heat of the bulk sugar conditioning air (blow-through).

Heterogeneity of the bulk phase can be created by de-mixing processes when filling and emptying silos, and by differing sugar qualities when filling. Different sugar qualities have different physical properties. The following physical properties are assumed to be of importance when formulating models to describe the formation of temperature and moisture gradients: bulk sugar heat capacity, heat conductivity, density, porosity, tortuosity and sorption isotherms. Most will vary with the sugar quality (that is: crystal size, reducing sugar content and ash content), some with temperature and moisture.

!!The models
The models should be able to predict temperature and moisture gradients in sugar silos by:
* Diffusion.
* Natural and forced convection, due to:
** Seasonal variations in outside humidity and temperature.
** Silo dimensions and construction.
** Silo conditioning.
** Sugar quality.
** The size of the bulk phase. 

The basic model could have the following features:
* Mass transfer by diffusion as well as convection in bulk sugar.
* Energy transfer by conduction as well as convection in bulk sugar.
* Energy and mass transfer through bulk sugar boundaries.
* Homogenous bulk phase (one quality) of simple geometry (cylindrical).
* Moving bulk - air boundary.
* Varying boundary conditions of inner as well as outer walls, bottom and air. 
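The conduction part of the basic model can be sketched numerically. The following is an illustrative sketch only, not a Danisco model: explicit finite differences for radial heat conduction in a homogeneous cylindrical bulk, with all property values and dimensions being assumed placeholders.

```python
import numpy as np

# Illustrative-only sketch: explicit finite differences for transient radial
# heat conduction, dT/dt = alpha * (1/r) d/dr (r dT/dr), in a homogeneous
# cylindrical sugar bulk. Property values and dimensions are assumptions.
k = 0.14        # bulk thermal conductivity, W/(m K)   (assumed)
rho = 850.0     # bulk density, kg/m^3                 (assumed)
cp = 1250.0     # heat capacity, J/(kg K)              (assumed)
alpha = k / (rho * cp)

R = 10.0                     # silo radius, m (assumed)
n = 100
r = np.linspace(1e-3, R, n)
dr = r[1] - r[0]
rp = 0.5 * (r[1:] + r[:-1])  # radii at the cell faces
dt = 0.4 * dr**2 / alpha     # within the explicit stability limit

T = np.full(n, 30.0)         # sugar filled warm, degC (assumed)
T_wall = 5.0                 # winter wall temperature, degC (assumed)

def step(T):
    """One conservative FTCS step of the radial conduction equation."""
    Tn = T.copy()
    flux = rp * (T[1:] - T[:-1]) / dr              # r * dT/dr at the faces
    Tn[1:-1] += dt * alpha / (r[1:-1] * dr) * (flux[1:] - flux[:-1])
    Tn[0] = Tn[1]            # symmetry at the axis
    Tn[-1] = T_wall          # fixed wall temperature
    return Tn

for _ in range(5000):        # integrate over a long storage period (illustrative)
    T = step(T)
print(f"centre {T[0]:.1f} degC, wall {T[-1]:.1f} degC")
```

Moisture migration would add an analogous diffusion-convection equation coupled to this one through the sorption isotherms.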

A more advanced model could have the following additional features:
* Heterogeneous bulk phase.
* Complex geometry.
* Differing boundary conditions at north- and south-facing walls.

A numerical model is sought that can be solved on a Pentium PC if written in e.g. Turbo Pascal or Fortran. Models developed for corn silos can be used as a starting point, e.g.:

Tanaka H, Yoshida K (1984) "Heat and mass transfer mechanisms in a grain storage silo". Engineering Sciences in the Food Industry. Elsevier Applied Science Publishers, Essex, England. pp 89-98

Khankari KK, Morey RV, Patankar SV (1994). "Mathematical model for moisture diffusion in stored grain due to temperature gradients". Transactions of the ASAE 37: 1591-1604
{{c{A simple experiment on the filament formation of an ink-jet printed droplet; here the filament formation is modest.}}}

Light-emitting polymer displays are a new, interesting flat-display principle. The active material in the display is a very thin semi-conducting polymer layer, on the order of 100 nanometres. To obtain these thin layers, a small concentration of the polymer is dissolved in a suitable solvent. Different colours can be obtained with different polymers. To make a full-colour display, red, green and blue polymer solutions have to be applied in pixels of typically 66*200 micron. The polymer solutions are applied by means of ink-jet printing: individual droplets are printed in the pixels, and by evaporating the solvent the final polymer layer is obtained. The polymers that are used have a high molecular weight, which causes the droplet formation to be quite different from that of an ordinary Newtonian liquid, i.e. a long filament can form during droplet formation. This can give rise to a decrease in the placement accuracy of the droplets on the substrate. To predict the behaviour of a droplet in an ink-jet printer, the material parameters of the liquid are very important. For a Newtonian liquid the shear viscosity is a sufficient parameter. For liquids with small concentrations of a high-molecular-weight polymer this is quite different. The question is: can the problem be solved the other way around? In other words, can we obtain material parameters from the droplet formation process out of an ink-jet nozzle? To begin with, it is important to have a mathematical model of the droplet formation.

!The problem formulation in more detail
The viscosity of an ink is an important parameter for droplet formation in an ink-jet head. The standard in the ink-jet printing world is to measure the shear viscosity of the liquid. Most common inks are Newtonian liquids, and therefore measurement of the shear viscosity is a suitable characterization technique. The inks that Philips wants to use are solutions of a high-molecular-weight polymer in small concentrations in a suitable solvent. The drop formation of these solutions is considerably different from that of a pure Newtonian liquid. More energy is needed to eject the droplet, and the droplets are formed with a filament. When this filament is too long it can break up into satellite droplets, and the directional accuracy can also decrease.

When these solutions are measured in a shear rheometer the viscosity is constant as a function of the shear rate, and in the proper regime for ink-jet printing. The droplet formation, however, shows this filament formation. The suggestion is that the filament formation during the ink-jet printing is caused by the elongational viscosity. It is well known that a small concentration of a high molecular weight polymer in a solvent can have a substantially larger elongational viscosity than a pure Newtonian liquid.

{{c{An extreme example of filament formation.}}}

The problem is that the shear viscosity cannot be used to characterize the `inks'. The elongational viscosity is not easy to measure, in contrast to the shear viscosity. It is also important to measure the elongational viscosity at the proper rate of deformation of the liquid. At this moment researchers at Philips use the length of the tail that is formed during droplet formation as a way to characterize their liquids. With an empirical relation they can transform this into a maximum shear viscosity at a typical concentration. At this moment this is a suitable way to characterize the liquids.
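One route from filament data to an elongational viscosity is a capillary-thinning analysis, in the spirit of filament break-up rheometry. The sketch below is illustrative only (it is not Philips's empirical tail-length relation), and every number in it is an assumption used to generate synthetic data.

```python
import numpy as np

# Illustrative capillary-thinning analysis (filament break-up rheometry
# style), NOT Philips's empirical tail-length relation. In the elastic
# regime the mid-filament radius of a dilute polymer solution decays
# roughly as R(t) = R0 * exp(-t / (3*lambda_r)), so a log-linear fit gives
# the relaxation time, and an apparent elongational viscosity follows from
# eta_app = -sigma / (2 * dR/dt). All numbers below are assumptions.

sigma = 0.030            # surface tension, N/m (assumed)
lam_true = 1.0e-4        # relaxation time used to make the fake data, s

t = np.linspace(0.0, 5e-4, 50)
R = 20e-6 * np.exp(-t / (3.0 * lam_true))    # synthetic radius data, m

# Relaxation time from the slope of ln R versus t.
slope, _ = np.polyfit(t, np.log(R), 1)
lam_fit = -1.0 / (3.0 * slope)

# Apparent elongational viscosity along the thinning curve.
dRdt = np.gradient(R, t)
eta_app = -sigma / (2.0 * dRdt)

print(f"fitted relaxation time: {lam_fit * 1e6:.1f} us")
```

With measured radius-versus-time data in place of the synthetic decay, the same fit would give an experimental relaxation time and apparent elongational viscosity.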

Philips is interested in whether the droplet formation could be used as a simple way to measure the elongational viscosity of the inks, and in comparing it with another possible measurement technique, which is not available at their laboratory.

* Can we obtain information on the velocity and the deformation from the droplet and filament formation?
* Can we obtain information on the elongational viscosity from the drop formation? At Philips, they already have a very simple model for this.
* Verify the model with experiments.
* Do we need a special visco-elastic model for the calculations?
Micro-fluidic devices consist of etched or printed channels and nodes in thin wafers, through which fluids are pumped to mix, react and structure (e.g. form drops). Applications range from sensors, chemical analysis and high-throughput screening to high-throughput chemistry and manufacturing. In manufacturing applications it would be desirable to have many such devices coupled to feed-streams and giving a product output. The particular manufacturing aspect we would like to consider has two feed-streams, oil and water, and the function of the devices is to produce drops of controlled size of one phase in the other. Some channels would carry pure water, others pure oil and others slugs of oil in water; switching from one to the other would occur at drop-generation junctions. The study group may also wish to formulate some of the other applications.

The devices are a network of channels, junctions and nodes. The problem is to understand the steady-state and dynamic response of such fluidic networks. We have observed instabilities in the flow pattern where the flow of components ceases in some channels, dominates in others, and the overall flow pattern breaks symmetries of the network. Channels may also foul and block, and this needs to occur without catastrophic consequences for the function of the rest of the network. We wish to establish design principles for robust networks and to have a mathematical tool-kit to analyse proposed designs. The following questions/issues occur to us:
* Is there a classification on general grounds of the possible instabilities?
* What is the sensitivity of the instabilities to the tolerances of the chip manufacture (e.g. channel width, junction geometry)? Turning this around: to what tolerances do devices need to be made to limit instabilities?
* Are there general design principles (e.g. feedback loops, re-routing, diodes etc.) that will give robustness for a given tolerance, and robustness against blocking?
* Where should monitors be placed, and would there be any signatures of the onset of problems?

There is a hierarchy of physical and mathematical issues.
* For the steady states, the generalisation and analysis of Kirchhoff's laws of mass conservation to multi-component flow of conserved components and immiscible components in complex networks of channels and nodes (although we imagine flow in channels involving a continuum of one phase and slugs of the other, we anticipate a continuum formulation based on volume fractions).
* General theorems of such network equations, in particular stability.
* The formulation of suitable, physically inspired (again continuum) dynamic equations based on conservation of mass and momentum that will allow prediction of the dynamic response.
* Statistical generalisations of the above, including variations in channel geometry, widths etc., giving relationships between tolerance and instability.
* Discrete formulations, simulations of discrete slugs travelling through such networks.
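The steady-state problem in the first item can be illustrated for single-phase flow: mass conservation at the nodes with Hagen-Poiseuille channel resistances gives a weighted graph Laplacian for the node pressures. A minimal sketch on an assumed four-node "diamond" network:

```python
import numpy as np

# (i, j, R): channel between nodes i and j with hydraulic resistance R.
# A tiny 4-node diamond: inlet 0, two parallel branches via 1 and 2, outlet 3.
# All geometry/resistance values are illustrative assumptions.
edges = [(0, 1, 1.0), (0, 2, 1.0), (1, 3, 1.0), (2, 3, 2.0)]
n = 4

# Weighted graph Laplacian: Kirchhoff mass conservation with Q = (p_i - p_j)/R.
L = np.zeros((n, n))
for i, j, R in edges:
    g = 1.0 / R
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

# Fix the boundary pressures (inlet p=1, outlet p=0), solve for interior nodes.
fixed = {0: 1.0, 3: 0.0}
free = [k for k in range(n) if k not in fixed]
b = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
p = np.zeros(n)
for k, v in fixed.items():
    p[k] = v
p[free] = np.linalg.solve(L[np.ix_(free, free)], b)

for i, j, R in edges:
    print(f"channel {i}->{j}: Q = {(p[i] - p[j]) / R:+.3f}")
```

Stability and multi-phase questions would replace the constant resistances with flow-dependent ones, but the node-balance structure stays the same.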

We imagine there are analogous problems in traffic flow, blood flow, chemical plant design and flow in porous media. In addition, at the finer scale there are a host of more traditional fluid-mechanics and physics problems: 3D modelling of multiphase flow in confined geometry, especially at T-junctions and converging nozzles; dynamic contact angles; dynamics of three-phase contact lines; inclusion of interface dynamics with Marangoni effects; wall boundary conditions; electrophoresis in confined geometries; phase changes during flow. Much of this, however, becomes heavy finite-element work or a question of physics input. We would prefer emphasis on the larger-scale problems, but do not mind if some want to stray/advise on these latter areas.

Further material will be available at the Study Group. 
At Teijin Twaron in Arnhem new ways of producing fibres are being developed.
One of the interesting new techniques is "The Rotor Spinning Process".

In principle, this process looks a lot like the making of sugarfloss (or cotton candy) at the carnival / fair. Here, however, we deal with a polymer-filled disc with tiny holes. The polymer is pressed, by the centrifugal forces, through the holes to the outside. The process is already in operation at the company; at Teijin Twaron there is also a pilot machine in which variations in process and geometry can be tested.

The liquid polymer solidifies and becomes a thin filament on the exterior boundary of the machine. The purpose of the work with the mathematicians during the "Math with Industry" week is to verify an existing model based on a momentum equation and mass balances and, if possible, to improve the model.

A first-order approximation of the path the filament makes (without modelling air friction) in the space between the disc and the exterior boundary of the machine already exists. A description of the path that includes water cooling and air friction is also available. However, the model can be improved; certain states of the rotor spinning process should be approximated in a better way.
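The friction-free, first-order picture can be sketched directly: in the lab frame a fluid element leaving the rim travels in a straight line with its exit velocity, and the curved filament path is that line viewed in the rotating frame. All numbers below are illustrative assumptions, not Teijin Twaron data.

```python
import numpy as np

# First-order sketch of the filament path between disc and outer wall,
# neglecting air friction and gravity. All parameter values are assumed.

omega = 2 * np.pi * 50   # disc speed, rad/s (3000 rpm, assumed)
r0 = 0.10                # disc radius, m (assumed)
v_r = 2.0                # radial exit speed through the hole, m/s (assumed)

t = np.linspace(0.0, 0.05, 500)

# Lab frame: a straight line starting at the rim.
x = r0 + v_r * t
y = omega * r0 * t       # the rim's tangential speed carries it sideways

# Rotate by -omega*t to view the path in the disc (rotating) frame.
c, s = np.cos(-omega * t), np.sin(-omega * t)
xr = c * x - s * y
yr = s * x + c * y

r = np.hypot(xr, yr)     # radius is the same in either frame
print(f"radius grows from {r[0]:.2f} m to {r[-1]:.2f} m")
```

Adding air friction and cooling turns the straight lab-frame line into an ODE system along the filament, which is where the model improvement comes in.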

''The purpose of the modelling in some more detail reads:''
# Try to determine the situation (process and geometry) in which a continuous filament can be generated. Breaking of filaments may cause problems in the use of the material if the length of the filament is below a critical length.
# Try to determine the circumstances (process and geometry) in which the length of a broken filament can be determined beforehand. In this case, fibres can, in principle, be produced.
# Determine the effect of the temperature, rotor speed, etc. in the present operating situation in order to achieve a robust production process.
|''Description:''|A bar to switch between tiddlers through tabs (like browser tabs bar).|
|''Date:''|Jan 18, 2008|
|''Author:''|Pascal Collin|
|''License:''|[[BSD open source license|License]]|
|''Browser:''|Firefox 2.0, Internet Explorer 6.0, others|
On [[homepage|http://visualtw.ouvaton.org/VisualTW.html]], open several tiddlers to use the tabs bar.
#import this tiddler from [[homepage|http://visualtw.ouvaton.org/VisualTW.html]] (tagged as systemConfig)
#save and reload
#''if you're using a custom [[PageTemplate]]'', add {{{<div id='tiddlersBar' refresh='none' ondblclick='config.macros.tiddlersBar.onTiddlersBarAction(event)'></div>}}} before {{{<div id='tiddlerDisplay'></div>}}}
#optionally, adjust StyleSheetTiddlersBar
*Double-clicking on the tiddlers bar (where there is no tab) creates a new tiddler.
*Tabs include a button to close {{{x}}} or save {{{!}}} their tiddler.
*By default, clicking on the current tab closes all other tiddlers.
!Configuration options 
<<option chkDisableTabsBar>> Disable the tabs bar (to print, for example).
<<option chkHideTabsBarWhenSingleTab >> Automatically hide the tabs bar when only one tiddler is displayed. 
<<option txtSelectedTiddlerTabButton>> ''selected'' tab command button.
<<option txtPreviousTabKey>> previous tab access key.
<<option txtNextTabKey>> next tab access key.
config.options.chkDisableTabsBar = config.options.chkDisableTabsBar ? config.options.chkDisableTabsBar : false;
config.options.chkHideTabsBarWhenSingleTab  = config.options.chkHideTabsBarWhenSingleTab  ? config.options.chkHideTabsBarWhenSingleTab  : false;
config.options.txtSelectedTiddlerTabButton = config.options.txtSelectedTiddlerTabButton ? config.options.txtSelectedTiddlerTabButton : "closeOthers";
config.options.txtPreviousTabKey = config.options.txtPreviousTabKey ? config.options.txtPreviousTabKey : "";
config.options.txtNextTabKey = config.options.txtNextTabKey ? config.options.txtNextTabKey : "";
config.macros.tiddlersBar = {
	tooltip : "see ",
	tooltipClose : "click here to close this tab",
	tooltipSave : "click here to save this tab",
	promptRename : "Enter tiddler new name",
	currentTiddler : "",
	previousState : false,
	previousKey : config.options.txtPreviousTabKey,
	nextKey : config.options.txtNextTabKey,	
	tabsAnimationSource : null, //use document.getElementById("tiddlerDisplay") if you need animation on tab switching.
	handler: function(place,macroName,params) {
		var previous = null;
		if (config.macros.tiddlersBar.isShown())
				if (title==config.macros.tiddlersBar.currentTiddler){
					var d = createTiddlyElement(null,"span",null,"tab tabSelected");
					if (previous && config.macros.tiddlersBar.previousKey) previous.setAttribute("accessKey",config.macros.tiddlersBar.nextKey);
					previous = "active";
				else {
					var d = createTiddlyElement(place,"span",null,"tab tabUnselected");
					var btn = createTiddlyButton(d,title,config.macros.tiddlersBar.tooltip + title,config.macros.tiddlersBar.onSelectTab);
					btn.setAttribute("tiddler", title);
					if (previous=="active" && config.macros.tiddlersBar.nextKey) btn.setAttribute("accessKey",config.macros.tiddlersBar.previousKey);
				var isDirty =story.isDirty(title);
				var c = createTiddlyButton(d,isDirty ?"!":"x",isDirty?config.macros.tiddlersBar.tooltipSave:config.macros.tiddlersBar.tooltipClose, isDirty ? config.macros.tiddlersBar.onTabSave : config.macros.tiddlersBar.onTabClose,"tabButton");
				c.setAttribute("tiddler", title);
				if (place.childNodes) {
					place.insertBefore(document.createTextNode(" "),place.firstChild); // to allow break line here when many tiddlers are open
				else place.appendChild(d);
	refresh: function(place,params){
		if (config.macros.tiddlersBar.previousState!=config.macros.tiddlersBar.isShown()) {
			if (config.macros.tiddlersBar.previousState) story.forEachTiddler(function(t,e){e.style.display="";});
			config.macros.tiddlersBar.previousState = !config.macros.tiddlersBar.previousState;
	isShown : function(){
		if (config.options.chkDisableTabsBar) return false;
		if (!config.options.chkHideTabsBarWhenSingleTab) return true;
		var cpt=0;
		return (cpt>1);
	selectNextTab : function(){  //used when the current tab is closed (to select another tab)
		var previous="";
			if (!config.macros.tiddlersBar.currentTiddler) {
			if (title==config.macros.tiddlersBar.currentTiddler) {
				if (previous) {
				else config.macros.tiddlersBar.currentTiddler=""; 	// so next tab will be selected
			else previous=title;
	onSelectTab : function(e){
		var t = this.getAttribute("tiddler");
		if (t) story.displayTiddler(null,t);
		return false;
	onTabClose : function(e){
		var t = this.getAttribute("tiddler");
		if (t) {
			if(story.hasChanges(t) && !readOnly) {
				return false;
		return false;
	onTabSave : function(e) {
		var t = this.getAttribute("tiddler");
		if (!e) e=window.event;
		if (t) config.commands.saveTiddler.handler(e,null,t);
		return false;
	onSelectedTabButtonClick : function(event,src,title) {
		var t = this.getAttribute("tiddler");
		if (!event) event=window.event;
		if (t && config.options.txtSelectedTiddlerTabButton && config.commands[config.options.txtSelectedTiddlerTabButton])
			config.commands[config.options.txtSelectedTiddlerTabButton].handler(event, src, t);
		return false;
	onTiddlersBarAction: function(event) {
		var source = event.target ? event.target.id : event.srcElement.id; // FF uses target and IE uses srcElement;
		if (source=="tiddlersBar") story.displayTiddler(null,'New Tiddler',DEFAULT_EDIT_TEMPLATE,false,null,null);
	createActiveTabButton : function(place,title) {
		if (config.options.txtSelectedTiddlerTabButton && config.commands[config.options.txtSelectedTiddlerTabButton]) {
			var btn = createTiddlyButton(place, title, config.commands[config.options.txtSelectedTiddlerTabButton].tooltip ,config.macros.tiddlersBar.onSelectedTabButtonClick);
			btn.setAttribute("tiddler", title);

story.coreCloseTiddler = story.coreCloseTiddler? story.coreCloseTiddler : story.closeTiddler;
story.coreDisplayTiddler = story.coreDisplayTiddler ? story.coreDisplayTiddler : story.displayTiddler;

story.closeTiddler = function(title,animate,unused) {
	if (title==config.macros.tiddlersBar.currentTiddler)
	story.coreCloseTiddler(title,false,unused); //disable animation to get it closed before calling tiddlersBar.refresh
	var e=document.getElementById("tiddlersBar");
	if (e) config.macros.tiddlersBar.refresh(e,null);

story.displayTiddler = function(srcElement,tiddler,template,animate,unused,customFields,toggle){
	var title = (tiddler instanceof Tiddler)? tiddler.title : tiddler;  
	if (config.macros.tiddlersBar.isShown()) {
			if (t!=title) e.style.display="none";
			else e.style.display="";
	var e=document.getElementById("tiddlersBar");
	if (e) config.macros.tiddlersBar.refresh(e,null);

var coreRefreshPageTemplate = coreRefreshPageTemplate ? coreRefreshPageTemplate : refreshPageTemplate;
refreshPageTemplate = function(title) {
	if (config.macros.tiddlersBar) config.macros.tiddlersBar.refresh(document.getElementById("tiddlersBar"));

ensureVisible=function (e) {return 0} //disable bottom scrolling (not useful now)

config.shadowTiddlers.StyleSheetTiddlersBar = "/*{{{*/\n";
config.shadowTiddlers.StyleSheetTiddlersBar += "#tiddlersBar .button {border:0}\n";
config.shadowTiddlers.StyleSheetTiddlersBar += "#tiddlersBar .tab {white-space:nowrap}\n";
config.shadowTiddlers.StyleSheetTiddlersBar += "#tiddlersBar {padding : 1em 0.5em 2px 0.5em}\n";
config.shadowTiddlers.StyleSheetTiddlersBar += ".tabUnselected .tabButton, .tabSelected .tabButton {padding : 0 2px 0 2px; margin: 0 0 0 4px;}\n";
config.shadowTiddlers.StyleSheetTiddlersBar += ".tiddler, .tabContents {border:1px [[ColorPalette::TertiaryPale]] solid;}\n";
config.shadowTiddlers.StyleSheetTiddlersBar +="/*}}}*/";
store.addNotification("StyleSheetTiddlersBar", refreshStyles);

config.refreshers.none = function(){return true;}
config.shadowTiddlers.PageTemplate=config.shadowTiddlers.PageTemplate.replace(/<div id='tiddlerDisplay'><\/div>/m,"<div id='tiddlersBar' refresh='none' ondblclick='config.macros.tiddlersBar.onTiddlersBarAction(event)'></div>\n<div id='tiddlerDisplay'></div>");

RichText editor.

See http://tinymce.moxiecode.com to learn more.
Danfoss Flow Division: 	

An ultrasonic flow meter determines the flow rate by measuring the transit time of an ultrasonic pulse travelling downstream and upstream in a duct carrying a fluid. The velocity is calculated as $V = \frac{K\cdot (T_2-T_1)}{T_1\cdot T_2}$. In some applications the difference between the upstream transit time and the downstream transit time, $T_2-T_1$, can be as low as 2 ns. Hence, an accuracy of 1% allows a maximal systematic error on the estimate of $T_2-T_1$ of only 20 ps! The base frequency of the burst is typically 1 MHz. The sound burst is traditionally generated and received by piezoelectric transducers.
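A quick numerical illustration of this sensitivity (the geometry factor $K$ and the transit times below are assumed values, not Danfoss figures):

```python
# Numeric illustration of V = K*(T2 - T1)/(T1*T2). K and the transit times
# are assumed values; the point is the picosecond-level sensitivity of the
# velocity estimate.

K = 0.05          # geometry factor, m (assumed)
T1 = 100.0e-6     # downstream transit time, s (assumed)
dT = 2.0e-9       # transit-time difference, s (the low end quoted above)
T2 = T1 + dT

V = K * (T2 - T1) / (T1 * T2)
print(f"V = {V * 1000:.3f} mm/s")

# A 20 ps systematic error on (T2 - T1) shifts the estimate by about 1 %:
V_err = K * (dT + 20e-12) / (T1 * T2)
print(f"relative error: {(V_err - V) / V * 100:.2f} %")
```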

An essential part of all ultrasonic flow meters is the circuit or algorithm that determines the time of arrival of the ultrasonic pulse. A typical signal can be seen here:


Over time the shape of the signal can change and the signal upstream and downstream can be slightly different due to contamination, temperature, flow rate, noise, etc. Generally speaking, it is not possible to use the first period of the received signal as the sole indicator of the arrival due to noise.

In the past Danfoss has used several techniques, the most recent will be explained in the final notes.

Ideas for algorithms can be evaluated by simulation on sampled real world signals.
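As an illustration of one standard family of arrival-time algorithms (not necessarily the technique Danfoss uses), the sketch below cross-correlates synthetic upstream and downstream bursts and refines the correlation peak by parabolic interpolation to obtain a sub-sample delay estimate; all burst parameters except the 1 MHz base frequency are assumptions.

```python
import numpy as np

# Sketch of a cross-correlation arrival-time estimator with parabolic
# peak refinement. Sample rate, envelope and delays are assumed values.

fs = 50e6                       # sample rate, Hz (assumed)
f0 = 1e6                        # burst base frequency, Hz
t = np.arange(0, 40e-6, 1 / fs)

def burst(delay):
    """Gaussian-windowed tone burst arriving `delay` seconds into the record."""
    tau = t - delay
    return np.exp(-((tau - 5e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * f0 * tau)

true_dt = 2.0e-9                # true upstream/downstream difference, s
up = burst(10e-6 + true_dt)
down = burst(10e-6)

xc = np.correlate(up, down, mode="full")
k = np.argmax(xc)

# Parabolic interpolation around the peak -> fractional-sample refinement.
y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
est_dt = (k - (len(up) - 1) + frac) / fs

print(f"estimated dT = {est_dt * 1e9:.2f} ns (true {true_dt * 1e9:.2f} ns)")
```

The same estimator can be run on the sampled real-world signals mentioned above to check its robustness against shape changes and noise.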

[[The problem statement in PDF|p/esgi/47/project2003_flow.pdf]]
Calculation of the stresses caused by the packing of a log load, Timberjack Oy

The driving speeds of forest machines and the sizes of their loads have kept growing because of ever higher efficiency requirements. In addition, advances in tyre technology and various dampers have improved driver comfort to the point that high speeds may be used even in very difficult terrain. This inevitably leads to larger stresses in the structures, for example the stresses caused by the packing of the log load. Vibrations during transport make the log load pack ever more tightly against itself and against the side supports, the bunks. The deflections of the bunks are clearly visible even to the naked eye: an unloaded bunk shaped like a cognac glass takes the shape of an ordinary drinking glass under the log load. Furthermore, during driving the chassis is subjected to very large acceleration loads and tilting of the load, which cause stress fluctuations in the bunks and, over time, even fatigue damage.

The task is to find a method for calculating the loads acting on the bunks from given vibrations (acceleration loads) of the chassis carrying the log load. The loads are transferred from the chassis to the bunks through two attachment points, one on each side. The logs can be treated as cylinders of uniform thickness in frictional contact with one another and with the bunks. The bunks are flexible, with either a circular or rectangular cross-section, and their deformations usually remain within the linear range of the material. The problem can, at least initially, be simplified to a planar case. In a real case 20-40 logs, with diameters of about 150-400 mm (covering, say, 95% of a normal distribution), are packed into the load space. Log lengths are usually 3-6 m; a four-metre log could be a good starting point. There are four bunks, distributed fairly evenly along the length of the load space.
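As a zeroth-order check on the magnitudes involved (not the contact-mechanics method sought here), one can treat the log load as a rigid mass under a quasi-static vertical acceleration and split the force evenly over the bunks and attachment points. All numbers below are assumptions chosen within the ranges stated in the problem.

```python
import numpy as np

# Zeroth-order magnitude check (assumed numbers throughout): rigid log load
# under a quasi-static vertical acceleration from the chassis, force split
# evenly over four bunks with two attachment points each. Log-log friction
# and bunk flexibility, the heart of the real problem, are ignored here.

rho_wood = 900.0                      # green log density, kg/m^3 (assumed)
d, length, n_logs = 0.275, 4.0, 30    # mean diameter, length, count (assumed within the ranges)
m = n_logs * rho_wood * np.pi * (d / 2) ** 2 * length   # total load mass, kg

g = 9.81
a_z = 2.0 * g                         # vertical acceleration peak (assumed)

F_total = m * (g + a_z)               # total vertical force during the pulse, N
F_bunk = F_total / 4.0                # four bunks share it evenly (idealized)
F_attach = F_bunk / 2.0               # two attachment points per bunk

print(f"load mass {m:.0f} kg, force per attachment point {F_attach/1e3:.1f} kN")
```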
* 2000, Sep 11-15: Nottingham (UK). [[UK-MMSG 2001|UK-MMSG 2001]]
* 2001, Sep 10-14: Nottingham (UK). [[UK-MMSG 2002|UK-MMSG 2002]]
* 2002, Sep 9-13: Nottingham (UK). [[UK-MMSG 2003|UK-MMSG 2003]]
* 2004, Sep 13-17: Strathclyde (UK). [[UK-MMSG 2004|UK-MMSG 2004]]
* 2005, Sep 12-16: Oxford (UK). [[UK-MMSG 2005|UK-MMSG 2005]]
* 2006, Sep 11-15: Nottingham (UK). [[UK-MMSG 2006|UK-MMSG 2006]]
* 2007, Sep 10-14: Southampton (UK). [[UK-MMSG 2007|UK-MMSG 2007]]
* 2008, Sep 15-19: Loughborough (UK). [[UK-MMSG 2008|UK-MMSG 2008]]
* 2009, Sep 7-11: London (UK). [[UK-MMSG 2009|UK-MMSG 2009]]
* 2010, Sep 6-10: Strathclyde (UK). [[UK-MMSG 2010|UK-MMSG 2010]]
* 2011, Sep 5-9: Reading (UK). [[UK-MMSG 2011|UK-MMSG 2011]]
Our aim is to quantify uncertainty in flow performance prediction due to uncertainty in a reservoir description.

We are able to build a model of uncertainties in the reservoir properties, essentially the porosity and permeability fields, based on core samples, well logs and seismic data. From this starting PDF (probability density function) of the reservoir properties we want to estimate the PDF of results such as the oil production rates and the oil present in various regions.

A straightforward approach to this problem is the Monte Carlo method, where a large number of realizations, sampled from the PDF of the reservoir properties, are run through an oil reservoir fluid-flow simulator and post-processed to give the final desired PDF.
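A toy illustration of that Monte Carlo workflow, with a deliberately trivial stand-in for the flow simulator (the harmonic-mean effective permeability of a one-dimensional series of cells; the lognormal prior is also an assumption):

```python
import numpy as np

# Toy Monte Carlo sketch of uncertainty propagation. The "simulator" is a
# deliberate stand-in (harmonic-mean effective permeability of a 1-D series
# of cells), not a reservoir model; the lognormal prior is an assumption.

rng = np.random.default_rng(0)
n_cells, n_real = 50, 2000

# Sample realizations of a lognormal permeability field (log-mean 0, log-std 1).
k = np.exp(rng.normal(0.0, 1.0, size=(n_real, n_cells)))

# 1-D series flow: the effective permeability is the harmonic mean, and the
# "production rate" output is taken proportional to it.
k_eff = n_cells / np.sum(1.0 / k, axis=1)

print(f"mean k_eff = {k_eff.mean():.3f}")
print(f"P10/P50/P90 = {np.percentile(k_eff, [10, 50, 90]).round(3)}")
```

A real study would replace the harmonic-mean line with a full reservoir simulation per realization, which is exactly the cost problem discussed next.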

Due to the length of the simulation time, this method cannot really be applied in practical cases, where only a small number of realizations can be simulated in the available time.

A number of different ideas have been pursued in order to solve this problem.

!Length scales
We first want to digress on the important role of the correlation length of our initial random field. We can imagine two opposite situations: one in which the correlation length is much smaller than the size of the system, and one in which it is of the same order of magnitude. In the first case we can expect that the effect of the heterogeneities will be averaged out during one simulation, so that the result will not vary significantly from one simulation to another. In the second case the result of our simulation will depend strongly on the particular realization sampled. This conjecture could be investigated more accurately using numerical experiments.
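A minimal numerical experiment of this kind: draw one-dimensional Gaussian random fields with a short and a long correlation length, and compare how much the field average (a stand-in for a flow result that averages heterogeneity) varies between realizations. The construction below is an illustrative sketch with assumed parameters.

```python
import numpy as np

# Illustrative sketch: spectral generation of stationary Gaussian fields
# with a prescribed correlation length, comparing realization-to-realization
# spread of the spatial average. Domain size and lengths are assumed.

rng = np.random.default_rng(1)
n, n_real = 256, 500

def field(corr_len):
    """Stationary Gaussian field via spectral smoothing of white noise."""
    x = np.arange(n)
    # Circular Gaussian covariance kernel; its FFT is the power spectrum.
    kern = np.exp(-0.5 * (np.minimum(x, n - x) / corr_len) ** 2)
    spec = np.abs(np.fft.rfft(kern))
    w = rng.normal(size=(n_real, n))
    f = np.fft.irfft(np.fft.rfft(w) * np.sqrt(spec), n)
    return f / f.std(axis=1, keepdims=True)   # unit variance per realization

spreads = {}
for L in (4, 64):
    spreads[L] = field(L).mean(axis=1).std()
    print(f"correlation length {L:3d}: std of realization averages = {spreads[L]:.3f}")
```

The short-correlation field averages out and its realization averages barely spread; the long-correlation field does not, in line with the conjecture above.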

!Numerical Methods
In order to reduce the simulation time, upscaling techniques are normally used. Various techniques have been implemented to upscale the reservoir properties to a coarser grid; however, these techniques normally rely on the results found by first running a simulation on a fine grid. What we are looking for is a way of characterizing uncertainty, so we are not interested in reproducing on a coarse grid results found on a fine grid. Instead we would like to obtain, by running several simulations on a coarse grid, a final PDF of the total oil recovery that approximates the PDF we would obtain by doing simulations on a fine grid.

We wonder if a Bayesian approach could be used in order to relate a large set of coarse grid simulations to a much smaller set of simulations from finer grids.

!Analytical methods
There are also several attempts at analytical approximations to the stochastic partial differential equations of the process.

The major problem is the non-linearity of the equations; however, very promising results have been found in the case of single-phase flow that could be used as a starting point for two- or three-phase problems.

!History Matching
When we study the propagation of uncertainty through a flow simulator we must not forget the important requirement that the probability distribution should be conditioned on the observed flow history. Is there some way that this problem, too, can be tackled in a Bayesian framework involving coarse and fine grids?

Further material will be available at the Study Group.
Monica Spiteri
Lung Research, Directorate of Respiratory Medicine, University Hospital of North Staffordshire / Keele University ~ST4 6QG (monica.spiteri (at) uhns.nhs.uk)

!Health Condition
Pulmonary fibrosis (PF) is a devastating illness involving exaggerated lung scarring; with no efficacious therapy to modify its natural progressive clinical course. An estimated two-thirds of patients die within 2 to 4 years of diagnosis. Donated healthy lung transplants are used to replace fibrosed lungs, but need for transplants far outweighs available supply. Stem cell research currently offers tremendous promise for effective treatment of PF. However, clinical efficacy of stem cell–based strategies could be hampered by extracellular fibrotic factors driving disease processes at the target site. We //hypothesise that ~PF-related abundance of profibrogenic factors such as Connective Tissue Growth Factor (CTGF) and Transforming Growth Factor (TGF`beta1`) drives progenitor alveolar epithelial cells (AEC) away from terminal differentiation conducive to local alveolar tissue regeneration; towards effector fibroblast/myofibroblast development.// This fundamental question needs addressing; it has implications for stem cell therapy in repair of fibrosed lungs. As there are no animal models that fully capture ~PF-disease processes, information obtained from well-designed mathematical models could be critical for development of strategies that would beneficially enhance stem cell engraftment in //in vivo// implants.
Tao Sun, Sheila ~MacNeil
Kroto Research Institute, Department of Engineering Materials, Sheffield University, Broad Lane, Sheffield, S3 7HQ, UK


In normal human skin, melanocytes lie adjacent to the basal lamina, interspaced between the basal keratinocytes at regular intervals. These melanocytes project dendrites up through the keratinocyte layers and interact with many keratinocytes in a tightly regulated fashion. Research has indicated that one melanocyte contacts approximately 36 keratinocytes to form the so-called epidermal melanin unit. It is thought that this is achieved by keratinocytes dictating to melanocytes through a complex array of signals produced by these cells, but the process is not well understood, especially in 3D skin tissue organization.
Can a modelling approach help our understanding of how keratinocytes organize melanocytes? While there is growing knowledge of how individual cells respond from the genome through to the proteome and metabolome, it is difficult for biologists to integrate this growing body of new data and regenerate a holistic view of the organism. Computational modelling provides a powerful tool to handle this complexity, as it is capable of processing and organizing a huge amount of complex biological data and connecting experimental results to fundamental biological principles, thus improving our understanding of complex biological systems such as tissue morphogenesis and pathogenesis.
In addition to the massive literature on the complex interactions between these two types of cells, our research indicates that cell-cell and cell-substrate bonds might play a very important role in the formation of the epidermal melanin unit. The problem proposed involves extending existing agent-based keratinocyte colony formation models, or developing new mathematical models, to describe the complex interactions between keratinocytes and melanocytes at the multi-cellular, cellular and sub-cellular levels; the models should be accessible to biologists and link directly with experiments in the lab.
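As a toy illustration of the epidermal melanin unit, the sketch below (all numbers invented) places one melanocyte per 6 x 6 block of a basal keratinocyte lattice, so that on average each melanocyte serves 36 keratinocytes, and assigns each keratinocyte to its nearest melanocyte:

```python
import numpy as np

# Basal keratinocytes on an n x n lattice; one melanocyte per 6 x 6 block.
# All parameters are invented for illustration.
n = 24

# Melanocyte positions: centres of the 6 x 6 blocks.
mel_rows, mel_cols = np.meshgrid(np.arange(3, n, 6), np.arange(3, n, 6),
                                 indexing="ij")
melanocytes = list(zip(mel_rows.ravel(), mel_cols.ravel()))

# Assign each keratinocyte to its nearest melanocyte ("dendritic contact").
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dists = np.stack([(ii - r) ** 2 + (jj - c) ** 2 for r, c in melanocytes])
unit_of = dists.argmin(axis=0)        # melanin-unit label per keratinocyte
unit_sizes = np.bincount(unit_of.ravel())
# Mean unit size is exactly 36 keratinocytes per melanocyte.
```

A genuine agent-based model would of course replace this static assignment with signalling, adhesion and movement rules; the sketch only fixes the geometry of the 1:36 ratio.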
Zhuoyu Li, Paul Appleton, Inke Näthke
Cell & developmental Biology, University of Dundee, Dundee DD1 5EH.


MDCK and other cultured epithelial cells form “domes” when grown on rigid substrates in confluent monolayers. Domes are formed by cells that lift up from the solid support but remain continuous with the monolayer. There is a paucity of data available about the cellular pathways that control this process. When grown on flexible, semi-permeable filters, domes are not formed. The activity of the `Na^+`/`K^+`-~ATPase is crucial for dome formation suggesting that ion transport and associated water flow are important.
We have monitored domes over many hours in MDCK cell cultures and found that domes grow, collapse, and usually re-form in the same place. We have data for inhibitors that do and do not alter this behaviour. We also know that light can “poison” dome formation. Interestingly, we have created mutant cells that form domes of varying size and dynamics. These mutant cells differ only in the expression of a fluorescent protein we introduced stably.
The challenge I would like to pose to the group is to develop a model that describes doming behaviour, using the provided movies as a source of information about the physical dimensions of domes relative to cells and the speed and patterns of the re-arrangements that accompany dome formation.
The model should take into account the cellular parameters that have to be altered to generate the different doming behaviour of the two mutants. The kinds of parameters that should be included in the model are cell-cell and cell-substrate adhesion, the permeability of the cellular junctions, and cellular stretchiness (for lack of a better term). Another relevant question in this context is whether, and in what way, cells surrounding a dome differ from those further away or inside it. Understanding the physical properties that are involved will allow us to identify and validate cellular pathways involved in tissue shape changes.
Prof. Alastair J. Munro (1) and Dr. Ingeborg M.M. van Leeuwen (1,2)
(1) Department of Surgery and Oncology, Ninewells Hospital, University of Dundee
(2) Department of Microbiology, Tumour and Cell Biology, Karolinska Institute, Stockholm


Cancer therapy with drugs and/or radiation can damage the structure and functional integrity of the gastrointestinal epithelium, the extent of the damage being dependent upon a number of identifiable variables. Such damage is dose-limiting, while limitations on dose may compromise the effectiveness of treatment. There have been extensive studies on the kinetics of damage and repair of the gastrointestinal epithelium following a variety of insults. Some of these data have already been incorporated into mathematical models.

!Assessing dose-limiting bystander effects in the gut epithelium
Radiotherapy protocols are generally planned based solely on the tolerance of the normal tissues directly exposed to the beam. Recent experimental evidence suggests, however, that cells outside the exposure field are subject to radiation-induced bystander effects resulting from cell-cell and cell-matrix interactions. The spatial propagation of bystander effects is particularly relevant at low doses, as under these conditions only a small number of cells suffer a “direct hit”. We propose to use mathematical modelling to quantify such DNA-damage-independent effects and estimate the resulting net tolerance of the normal tissue.

!Devising optimal treatment schedules for patients with cancer
Most schedules currently used in clinical practice have been derived empirically and are employed in a standard fashion, with little account taken of patient-to-patient variation. We suggest that it should be possible, using available biological data in conjunction with mathematical modelling, to devise an approach to treatment scheduling that is more individually based and takes account of patient-to-patient variation in susceptibility to harm. In essence it may be possible to increase the intensity of scheduling for patients who are at lower risk of treatment-related gastrointestinal damage and, conversely, decrease intensity for patients considered to be particularly susceptible to the adverse effects of treatment.

# [[Alveolar Epithelial Cell Injury and Repair in Fibrotic Lung Disease|VPH 2009: Alveolar Epithelial Cell Injury and Repair in Fibrotic Lung Disease]]
# [[Modelling doming in epithelial cells: physical properties of epithelial cells that permit doming and differences in cells in domes compared to non-doming neighbours?|VPH 2009: Modelling doming in epithelial cells: physical properties of epithelial cells that permit doming and differences in cells in domes compared to non-doming neighbours?]]
# [[How do keratinocytes count melanocytes and plant them in just the right positions in the epidermis?|VPH 2009: How do keratinocytes count melanocytes and plant them in just the right positions in the epidermis?]]
# [[Quantifying radiation-mediated damage to the gastrointestinal epithelium: Applications to cancer radiotherapy|VPH 2009: Quantifying radiation-mediated damage to the gastrointestinal epithelium: Applications to cancer radiotherapy]]
# [[Tensegrity as a main determinant of tissue morphogenesis|VPH 2009: Tensegrity as a main determinant of tissue morphogenesis]]
Professor Ana M. Soto, Tufts University and University of Ulster;
Professor Carlos Sonnenschein, Tufts University
Dr Kurt Saetzler, University of Ulster,
Professor Helen Byrne, University of Nottingham


Mechanical forces are well known to play a key role in shaping organs such as bone. However, the precise roles of different types of mechanical stimuli in the morphogenesis of soft tissues remain to be fully explained. The recent development of 3D culture models that recapitulate the structure of glandular epithelial structures and their surrounding stroma has created an opportunity to observe how the tissue is formed (the shape, topology and movement of the emerging structures) and to measure the physical forces acting globally and locally. Our overarching aim is to identify the physico-chemical mechanisms that regulate the shape of the evolving epithelial structures. We will achieve this goal by considering the following questions: 1) what properties of the stroma favour the appearance of spherical structures rather than cylindrical ones (and vice versa)? 2) what factors regulate branching of the epithelial mass?
* 2009, Jun 29- Jul3: Nottingham (UK). [[VPH 2009|VPH 2009]]

# [[Inversion of Electrical Conductivity Parameters in Double-Layered Earth with 3-Dimensional Anomalies |WIA 2009: Inversion of Electrical Conductivity Parameters in Double-Layered Earth with 3-Dimensional Anomalies]]
# [[Evaluation of Environmental Quality and Evolution in Urban Soils of Qingdao City|WIA 2009: Evaluation of Environmental Quality and Evolution in Urban Soils of Qingdao City]]
# [[Service Center Optimization (Nonlinear Integer Programming) |WIA 2009: Service Center Optimization (Nonlinear Integer Programming)]]
# [[Computational Fluid Dynamics (CFD) Modelling on Soot Yield for Fire Engineering Assessment|WIA 2009: Computational Fluid Dynamics (CFD) Modelling on Soot Yield for Fire Engineering Assessment]]
# [[Modelling PWM control of a single phase induction motor|WIA 2009: Modelling PWM control of a single phase induction motor]]
# [[Regime Changes in Non-Stationary Time-Series|WIA 2009: Regime Changes in Non-Stationary Time-Series]]
Hypothermia is used in a number of invasive cardiosurgical procedures. In limited hypothermia, a routine procedure in open heart surgery, the human body is cooled down to 32 degrees Celsius, whereas in deep hypothermia, which is used less frequently, body temperatures go down as far as 20 degrees.

The actual cooling of the body is done by circulating the blood through a heart-lung machine, in which the blood exchanges heat with cold water. Rewarming is done by the same procedure with hot water, and afterwards by contact of the body with hot air in a "bear hug". Medical complications, such as brain damage or damage to other vital organs, can arise if this process takes place too quickly. For practical purposes, however, the warming procedure has to be performed as quickly as is safely possible.

The actual controls used at present are rather crude, and suggestions for improving the procedure are very welcome. Moreover, precise control of the reheating procedure, in relation to the data observed during the cooling process, calls for a much better understanding.

The goal of the Study Group is to develop a model that uses the data gathered during the cooling process to predict the time and method for reheating by circulating warm blood. In addition, we seek a better understanding of the use of hot air for warming up the body, preventing temperature drops and stabilising the body temperature.

Real data will be provided consisting of three different temperature curves, namely skin, blood and internal body temperature. 
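A minimal lumped two-compartment sketch (core and peripheral/skin temperature, with the heart-lung machine relaxing the core towards the circulating-blood temperature) illustrates the kind of model sought; all rate constants and temperatures below are invented for illustration:

```python
import numpy as np

# Two-compartment lumped model: core temperature Tc and peripheral/skin
# temperature Tp (degC).  The heart-lung machine relaxes Tc towards the
# circulating-blood temperature T_blood; Tp exchanges heat with the core
# and with the environment.  All rate constants are hypothetical.
def simulate(T_blood, Tc0=37.0, Tp0=34.0, T_env=22.0,
             k_machine=0.1, k_cp=0.02, k_env=0.01,
             dt=0.5, n_steps=600):
    Tc, Tp = Tc0, Tp0
    history = []
    for _ in range(n_steps):                       # forward-Euler integration
        dTc = -k_machine * (Tc - T_blood) - k_cp * (Tc - Tp)
        dTp = k_cp * (Tc - Tp) - k_env * (Tp - T_env)
        Tc += dt * dTc
        Tp += dt * dTp
        history.append((Tc, Tp))
    return np.array(history)

# Cool towards limited hypothermia, then rewarm from the cooled state.
cooling = simulate(T_blood=28.0)
rewarm = simulate(T_blood=38.0, Tc0=cooling[-1, 0], Tp0=cooling[-1, 1])
```

Fitting the rate constants of such a model to the skin, blood and internal temperature curves recorded during cooling would be a natural first step towards predicting the rewarming schedule.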
Nederlands Meetinstituut

For the calibration of weights with various nominal masses, metrology institutes use special weight sets. For instance the decade from 1000 g to 100 g is 'covered' by six weights: 1000 g, 500 g, 2 x 200 g and 2 x 100 g. In order to be able to distinguish between weights with the same nominal mass, a small dot is used to mark one of them. So one can speak of 100 g and 100 g· .

The mass of each individual weight of the set can be determined by direct comparison with a mass standard of identical nominal mass or by using a weighing scheme. At the highest level of metrology only the national mass standard made of platinum-iridium is available. This mass standard has a nominal value of 1000 g and cannot be used for the determination of e.g. the mass of a 200 g weight by using direct comparison. So at this level weighing schemes must be used.

A weighing scheme is an overdetermined system of weighing equations. The scheme consists of several mass comparisons carried out with certain combinations of weights. In this scheme the mass standard with known mass also participates. With a very accurate balance the mass differences of the weighing equations are measured. For instance:

$$ Δ m_1 = m_{1000} - (m_{500} + m_{200} + m_{200·} + m_{100}). $$

By using least-squares analysis, the masses of the individual weights can be determined from the mass of the standard and the measured mass differences. With the variance-covariance matrix of the calculated masses, the uncertainty of each mass can also be calculated.
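The least-squares computation can be sketched as follows; the particular weighing scheme, the "true" masses used to generate the balance readings, and all numerical values are invented for illustration:

```python
import numpy as np

# Unknowns: x = [m500, m200, m200dot, m100, m100dot] (grams).
# The 1000 g national standard is the only known mass (value hypothetical).
m_std = 1000.00000

# Design matrix: each row encodes one weighing equation.
A = np.array([
    [-1, -1, -1, -1,  0],   # m1000 - (m500 + m200 + m200. + m100)
    [-1, -1, -1,  0, -1],   # m1000 - (m500 + m200 + m200. + m100.)
    [ 1, -1, -1, -1,  0],   # m500  - (m200 + m200. + m100)
    [ 1, -1, -1,  0, -1],   # m500  - (m200 + m200. + m100.)
    [ 0,  1, -1,  0,  0],   # m200  - m200.
    [ 0,  1,  0, -1, -1],   # m200  - (m100 + m100.)
    [ 0,  0,  0,  1, -1],   # m100  - m100.
], dtype=float)
std_in_row = np.array([1, 1, 0, 0, 0, 0, 0], dtype=float)  # rows with m1000

# Invented "true" masses stand in for the balance readings Delta_i.
x_true = np.array([500.00002, 200.00001, 199.99999, 100.000005, 99.999995])
delta = std_in_row * m_std + A @ x_true    # measured mass differences (g)

# Move the known standard to the right-hand side, then solve A @ x = b.
b = delta - std_in_row * m_std
x_hat, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# Variance-covariance matrix of the estimated masses.
dof = A.shape[0] - A.shape[1]
sigma2 = res[0] / dof if res.size else 0.0
cov = sigma2 * np.linalg.inv(A.T @ A)
```

The diagonal of `cov` gives the variance of each calibrated mass, which is how the scheme's uncertainty budget is evaluated.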

There are various schemes in use: not only for submultiples (downward series, in which the masses of weights are determined from a mass standard with a higher nominal value), but also for multiples (upward series). To reduce the uncertainty, each mass comparison is repeated several times, and the average weighing scheme comprises 8 to 14 comparisons. This makes this method of mass determination very time-consuming.

The question is which submultiple/multiple scheme is the most efficient (with regard to number of weighing equations and number of repetitions) and results in the lowest uncertainty.
Also available as a [[pdf-file|p/esgi/52/weights.pdf]] (with pictures). 
The aim of this service is to collect the descriptions of (all) the problems that have been presented in the past mathematical Study Groups around the world.

!How to use this service
Please either ''search'' or ''navigate'' the list of Study Groups and problems by using the navigation panel on the right of this page.

Every newly opened item gets a tab (see above). To close it, or to see the other items, please use the tabs above.

These pages use [[TiddlyWiki|http://www.tiddlywiki.com/]] and [[ASCIIMathML|http://www1.chapman.edu/~jipsen/mathml/asciimath.html]] to render mathematics in [[MathML|http://www.w3.org/Math/]], plus a few [[TiddlyWiki|http://www.tiddlywiki.com/]] plugins.
You may want to install the [[STIX fonts|https://www.eyeasme.com/Joe/MathML/MathML_browser_test]] if the mathematical symbols do not look satisfactory.
* 2002, Jul 8-12: Hong Kong (China). [[WIA 1, 2002|WIA 1, 2002]]
* 2006, Dec 4-8: Hong Kong (China). [[WIA 2, 2006|WIA 2, 2006]]
* 2009, Dec 7-11: Hong Kong (China). [[WIA 3, 2009|WIA 3, 2009]]

Thruster allocation
MARIN, the Maritime Research Institute Netherlands, has become a reliable, independent and innovative service provider for the maritime sector and a contributor to the well-being of society.

MARIN has been expanding the boundaries of maritime understanding with hydrodynamic research for over 70 years. Today, this research is applied for the benefit of Concept Development, Design Support, Operations Support and Tool Development. The services incorporate a unique combination of simulation, model testing, full-scale measurements and training programmes.
!Problem description
Many ships working in the offshore industry are equipped with a dynamic positioning (DP) system. The position of the vessel is measured, and actively controlled thrusters (main propellers, rudders, azimuthing thrusters, bow tunnel thrusters, ...) are used to keep the vessel at the desired location. The main advantage for DP vessels working in deep water is that no mooring system is required. DP systems consist of the following components: a position reference system (to determine the position error), a Kalman filter (to separate the low-frequency and wave-frequency motions, in real time, without phase delay), a controller (to determine the required forces FX, FY and MZ, based on the low-frequency position error) and a thrust allocation algorithm (to distribute the required total forces over the available thrusters). Typically, the vessel will have more thrusters than strictly necessary, resulting in an overdetermined allocation problem. The aim of the allocation algorithm is to generate the total thrust force while minimizing fuel consumption (power). The outputs of the thruster allocation algorithm are the RPM and azimuth angle settings for each of the individual thrusters. The resulting optimization problem poses multiple challenges.
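As a minimal sketch of the allocation step, assume the azimuth angles are frozen so that each thruster contributes linearly to (FX, FY, MZ); the minimum-norm force allocation (a simple quadratic proxy for minimum power) is then given by the Moore-Penrose pseudoinverse. The vessel layout and all numbers below are hypothetical:

```python
import numpy as np

# Thruster i at position (x_i, y_i) with frozen heading a_i produces a
# force u_i along its axis.  Columns of B map u_i to the net surge force
# FX, sway force FY and yaw moment MZ = x*FY - y*FX.
pos = np.array([[ 30.0,  0.0],    # bow tunnel thruster
                [-35.0,  0.0],    # stern tunnel thruster
                [-30.0, -5.0],    # port azimuth thruster
                [-30.0,  5.0]])   # starboard azimuth thruster
ang = np.deg2rad([90.0, 90.0, 45.0, 135.0])   # hypothetical headings

B = np.vstack([np.cos(ang),
               np.sin(ang),
               pos[:, 0] * np.sin(ang) - pos[:, 1] * np.cos(ang)])

tau = np.array([50.0, 120.0, 800.0])          # demanded (FX, FY, MZ)

# Four thrusters for three demands: the system is underdetermined in u,
# and the minimum-norm solution u = B^+ tau minimises sum(u_i^2) while
# reproducing the demands exactly (B has full row rank here).
u = np.linalg.pinv(B) @ tau
```

A full allocation algorithm must additionally optimise the azimuth angles, respect thruster saturation and forbidden zones, and account for thruster-thruster interaction, which is what makes the real problem challenging.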