Terror Threat Levels and Self-Exciting Functions

The UK is enduring a dark few months: tourists and a policeman were killed in an attack outside Parliament; a suicide bomber targeted the end of an Ariana Grande concert in Manchester; and a trio of thugs murdered people enjoying a Saturday night on London's vibrant South Bank. Terrorism is not new to the UK, but it is only since August 2006 that the current system of threat levels has been published and regularly reviewed.

There are five possible levels, as explained in this screenshot from the MI5 website:

[Screenshot: the five threat levels published by MI5, from LOW and MODERATE through SUBSTANTIAL and SEVERE to CRITICAL]
The descriptions are not particularly helpful, in my opinion. I don't think there is an objective difference between "unlikely" and "possible, but not likely", although the point is moot because the levels LOW and MODERATE have never been used.

For nearly three years before the Manchester bombing, the threat level was SEVERE. The day afterwards (23rd May) it was raised to CRITICAL. To many this may have felt like shutting the stable door after the horse had bolted. Four days later (27th May) the threat was lowered back to SEVERE, where it has since remained. The London Bridge attack occurred on 3rd June.

It seems reasonable to ask two questions:

1) Why was the threat level raised after the Manchester attack and not the two London incidents?
2) Why is the level raised after an attack when surely common sense would suggest that attacks are reasonably spaced out over time?

I suspect the answer to the first lies in the nature of the murders. The London terrorists needed no support network, only the ability to hire a vehicle and arm themselves with blades. The Manchester bomber, however, possessed a far more sophisticated device that required significant expertise to assemble and detonate.

The second question is more interesting from a mathematical perspective. It concerns how the frequency of terrorist attacks can be modelled over time. Analysing historical data has led statisticians to notice that in the period shortly following a terrorist attack, a second attack is more likely than usual. Attacks are rather like earthquakes in that respect: a fault can lie dormant for a decade, and then a large earthquake is followed by several aftershocks. The events seem to be self-perpetuating.

Mathematicians use Hawkes processes to understand these phenomena; they model self-exciting events and are named after Alan Hawkes, professor of statistics at Swansea University.

[Photograph: A. G. Hawkes]

He came up with the idea of a function that reacts to events, increasing in value during the period shortly after each one. I shall illustrate how this works using numbers plucked from nowhere, chosen just to make the graphs look reasonably nice.

Suppose we lived in an extremely volatile country where long-term data analysis suggests there is approximately one terrorist attack in every twenty days. Let p(t) be the probability of an attack happening on day t under normal circumstances; here that is a constant 1/20 = 0.05. This gives a graph that is rather dull in its uniformity: the threat of a terror attack is identical on every day of June.

[Graph: p(t) = 0.05 for every day of June]
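As a quick sketch of this flat background rate in Python (the 0.05 figure coming from one attack expected in every twenty days):

```python
# Background probability of an attack: one expected every 20 days,
# so each of the 30 days of June carries the same flat risk.
background = 1 / 20

june = {day: background for day in range(1, 31)}

# The "graph" is just a horizontal line: identical risk every day.
assert all(p == 0.05 for p in june.values())
```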
Now suppose there was a terror attack on 5th June. According to past observations, this will increase the daily probability of attack for a little while. The additional threat will subside over time. We can use a decay function to simulate the effect of this event on the likelihood of subsequent events.

p(t) = 0.05 + 0.4 × 0.25 × e^(-0.25(t - 5))   for t ≥ 5
In the above function, the number 5 in the exponent represents the day of the event, 0.4 represents the likelihood that one attack will yield another, and 0.25 is the reciprocal of the length of time we'd expect to wait until the second, linked attack (which is 4 days in this case); multiplying by 0.25 ensures that the extra threat adds up to 0.4 in total.
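A minimal sketch of this function in Python (the parameter names `alpha` and `beta` are my own labels for the 0.4 and 0.25 above):

```python
import math

def intensity(t, event_day=5, background=0.05, alpha=0.4, beta=0.25):
    """Daily attack probability after a single attack on event_day.
    alpha is the chance the attack triggers another; 1/beta is the
    expected wait (4 days) until that follow-up. The alpha * beta
    scaling makes the extra threat integrate to alpha in total."""
    if t < event_day:
        return background  # before the attack: flat background rate
    return background + alpha * beta * math.exp(-beta * (t - event_day))
```

On 5th June the threat jumps to 0.05 + 0.4 × 0.25 = 0.15, then decays back towards the 0.05 background.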

Suppose there was indeed a further attack after four days, perhaps one that had a 10% chance of yielding another attack. Then all we would do is add another term taking effect from 9th June.

p(t) = 0.05 + 0.4 × 0.25 × e^(-0.25(t - 5)) + 0.1 × 0.25 × e^(-0.25(t - 9))   for t ≥ 9
The more events there are, the more terms you add, and the longer it takes to get back near the background threat level:

[Graph: p(t) after several attacks, decaying slowly back towards 0.05]
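In code, this accumulation is just a sum over past events. Here is a sketch where each event carries its own triggering probability (the day/probability pairs are the made-up figures from above):

```python
import math

def intensity(t, events, background=0.05, beta=0.25):
    """Daily attack probability given a list of (day, alpha) events,
    where alpha is that attack's chance of triggering a follow-up."""
    rate = background
    for day, alpha in events:
        if t >= day:  # each past event adds its own decaying term
            rate += alpha * beta * math.exp(-beta * (t - day))
    return rate

# Attacks on 5th June (alpha 0.4) and 9th June (alpha 0.1): the
# terms stack, so the threat takes longer to fall back to 0.05.
events = [(5, 0.4), (9, 0.1)]
```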
Of course, you can't really use this to predict when a terrorist attack will occur. It merely gives us a better understanding of the underlying randomness, but it does explain why it is worth raising threat levels in response to major attacks. It might also help us decide how to deploy resources within the emergency services.
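To see that the model produces clustered yet unpredictable event times, one standard way to simulate a self-exciting process is Ogata's thinning method. A sketch, using the illustrative parameters from above:

```python
import math
import random

def simulate(horizon, background=0.05, alpha=0.4, beta=0.25, seed=1):
    """Draw attack times on [0, horizon) by thinning: propose
    candidates from an upper bound on the rate (the rate only
    decays between events), keeping each with probability
    rate / bound."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while True:
        # The rate at the current time bounds the rate at any later
        # time before the next accepted event.
        bound = background + sum(
            alpha * beta * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(bound)
        if t >= horizon:
            return events
        rate = background + sum(
            alpha * beta * math.exp(-beta * (t - s)) for s in events)
        if rng.random() < rate / bound:
            events.append(t)

# A clustered sequence of attack days (deterministic here via seed).
days = simulate(365)
```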

The company PredPol adds a spatial dimension to this kind of modelling to help US law enforcement agencies direct their patrol officers. It harvests crime data and produces maps of hot-spots where crime is likely to occur. Police are sent to those areas, and their presence deters potential offenders. As one police chief puts it, "Burglars and thieves work in a mathematical way, whether they know it or not". The efficacy of the system is nothing short of remarkable: in Alhambra, California, burglary frequency fell by 32% and car theft by 20% during the first year of use.

It is perhaps the randomness of terrorism - very violent events happening to ordinary civilians without warning - that is its most significant feature. I therefore find a crumb of comfort in the fact that there is an understood underlying shape to these events, and that the authorities can use this information to make decisions about our security. The London Bridge attackers were shot dead by armed police within eight minutes of the first 999 call; while little consolation to the families of those affected, it was an astonishingly fast resolution in an enormous city. We have data analysts to thank for having the right people in the right place at the right time.




from matheminutes http://ift.tt/2qXVsQW