The amount of data has grown exponentially in the last few years, and it will only continue to grow. Most insurers are struggling with the question "What do we do with all this new data?" Has anyone considered that traditional analytics and modelling tools can no longer cope with Big Data efficiently?
IBM has been marketing its cognitive platform Watson heavily; in most ads, you will have seen Watson holding an intelligent conversation with a celebrity. One key value-add of Watson, often overlooked, is its ability to analyse both structured and unstructured data at the same time, which revolutionises how data can be analysed, understood and correlated.
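To make the structured-plus-unstructured idea concrete, here is a minimal sketch of how an insurer might combine the two views of a claim. This is not Watson's actual API; the data, field names and keyword list are illustrative assumptions, with a hand-coded keyword count standing in for a real NLP model:

```python
import pandas as pd

# Structured claims data (illustrative).
claims = pd.DataFrame({
    "claim_id": [101, 102, 103],
    "sum_insured": [250_000, 80_000, 1_200_000],
    "region": ["FL", "TX", "FL"],
})

# Unstructured adjuster notes keyed by claim (illustrative).
notes = {
    101: "Roof torn off by hurricane-force winds; total loss likely.",
    102: "Minor hail damage to vehicle, repair estimate pending.",
    103: "Storm surge flooded ground floor; mould risk if not dried fast.",
}

# Naive severity cues a text model would learn; hand-coded here for brevity.
SEVERITY_TERMS = {"total loss", "flooded", "hurricane", "storm surge"}

def text_severity(note: str) -> int:
    """Count severity cues in a free-text note (stand-in for real NLP)."""
    note_lower = note.lower()
    return sum(term in note_lower for term in SEVERITY_TERMS)

# Join the two views: structured exposure x unstructured severity signal.
claims["text_score"] = claims["claim_id"].map(lambda cid: text_severity(notes[cid]))
claims["priority"] = claims["sum_insured"] * (1 + claims["text_score"])

print(claims.sort_values("priority", ascending=False))
```

In practice the keyword count would be replaced by a proper language model, but the join itself is the point: an unstructured signal enriching structured records so both can be analysed together.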
There is a reason why IBM acquired The Weather Company last year and formed an exclusive partnership with Twitter, giving Watson more data sources to play with. Perhaps cognitive computing will soon change the way catastrophe modelling works too…
I see two main developments driving the evolution of cat modelling over the next few years: big data, and the emergence of new model developers, including collaborative, open-source initiatives. The increasing volume of data, arriving from a widening variety of sources at greater frequency – aka big data – has significant implications across all kinds of fields, including cat modelling. For instance, when we evaluate a model, we now have significantly more and better data from academics, government agencies, and various other sources.