Expert insight – Paul Jackson, Head of Advanced Analytics, Bonamy Finch

Can you tell us a bit about yourself, your background and how you came to be where you are career-wise?

I started my career in 2001 at Research International’s Marketing Science Centre, following a degree in Sociology and Social Policy. Back then I was working mainly on analysis of market research surveys. Over time I built up my expertise in branding and segmentation, and in 2007 I joined Bonamy Finch, providers of advanced analytics services to clients around the world. At that point we were mainly serving clients and agencies with statistical analysis and support on surveys – mostly segmentation, but also key driver analysis on branding studies and conjoint.

What we’re doing has changed over time. The role of our analysts now covers three areas – survey analytics, multiple source analytics and customer analytics. The last two require data scientists to work across diverse data streams, building segmentations and predictive models from multiple sources of data, which can then be deployed directly on clients’ own servers and built into their ecosystems, or stored by us for continued database enrichment.

My role is Head of Advanced Analytics at Bonamy Finch. I head up a team of survey analysts and data scientists – clever analytical people who have a strong commercial focus. Drawing on our expertise in researching people, fusing data sets and advanced analytical capabilities, we pride ourselves on providing analytical insight that delivers better business decisions.

When you say they’re commercially focused, what does that look like for you?

It is really important to understand the client’s objectives, making sure that you can deliver a model or an insight which they can act on and which helps them move their business forward. We make sure we recruit the right people, so that they are commercially minded as well as being really strong analytically.

A lot of statisticians talk about techniques and predictions. But for us, it’s important to select the appropriate methodology to enable our client to make decisions, rather than prediction being the focus.

Take key driver analysis on customer satisfaction, for example. Imagine a Venn diagram that shows the relationship between “knowledgeable staff”, “helpful staff” and overall satisfaction. These two statements are likely to be very related. Some statistical techniques that aim to predict customer satisfaction just give you a ‘first past the post’ output. It might be that “helpful staff” is slightly more related to overall satisfaction, and a regression gives it a high importance, but because the model doesn’t gain much predictive power from adding the other statement, it shows that statement as unimportant, which just isn’t the case. If you deliver a model that says helpful staff is important but knowledgeable staff isn’t, it gives the wrong commercial focus.
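To make that concrete, here is a minimal sketch in Python – illustrative only, and not Bonamy Finch’s actual methodology – of the effect described above. With two highly correlated drivers, the incremental R² from adding the second driver is tiny (so a ‘first past the post’ view calls it unimportant), whereas a Shapley-style decomposition of R² shares the credit between the two correlated drivers.

```python
# Illustrative sketch: why a 'first past the post' view of driver importance
# can be misleading when drivers are correlated. Simulated data, hypothetical
# variable names.
import numpy as np
from itertools import permutations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 2000

# Two highly correlated drivers, as in the "helpful" vs "knowledgeable" example.
helpful = rng.normal(size=n)
knowledgeable = 0.95 * helpful + 0.3 * rng.normal(size=n)

# Overall satisfaction genuinely depends on both drivers.
satisfaction = 0.5 * helpful + 0.5 * knowledgeable + rng.normal(scale=0.7, size=n)

X = np.column_stack([helpful, knowledgeable])
names = ["helpful staff", "knowledgeable staff"]

def r2(cols):
    """R-squared of a linear model using only the listed predictor columns."""
    if not cols:
        return 0.0
    model = LinearRegression().fit(X[:, cols], satisfaction)
    return model.score(X[:, cols], satisfaction)

# 'First past the post': once "helpful staff" is in the model,
# "knowledgeable staff" adds almost no extra predictive power,
# so it can be labelled unimportant.
print("R2, helpful staff only:       ", round(r2([0]), 3))
print("R2, both drivers:             ", round(r2([0, 1]), 3))
print("Incremental R2, knowledgeable:", round(r2([0, 1]) - r2([0]), 3))

# Shapley-style importance: average each driver's marginal contribution to
# R-squared over every order in which it could enter the model, so credit
# is shared between the correlated drivers.
importance = {name: 0.0 for name in names}
orders = list(permutations(range(len(names))))
for order in orders:
    included = []
    for idx in order:
        importance[names[idx]] += (r2(included + [idx]) - r2(included)) / len(orders)
        included.append(idx)

print("Shapley-style importance:", {k: round(v, 3) for k, v in importance.items()})
```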

So the first element of commercial focus is the ability to deliver the right techniques for the task at hand, and to know what the results actually mean for clients, to deliver actionable analytics for better business decisions.

Commerciality also means a consultative mindset, by which I mean being able to have discussions about business issues and being very much focused on what the challenges are, so that recommendations can be made. When we’re talking about database attribution of segmentations, for example, it’s about having the ability to talk to people within the business to try and uncover relevant data sources, or to source external publicly available data or third-party sources, which will help us successfully tag our client’s database.

You’ve talked a couple of times about collecting data from multiple sources. How much of a challenge is it to join all of that together in a way that’s meaningful?

It varies by client, and GDPR is live now. A lot of companies are nervous about that, but essentially it’s just about having good practices in place, which we have had for a long time now. I’ve seen a longer trend over the last six or seven years towards clients collecting more data themselves, utilising data from third parties and, importantly, greater willingness and ability to share that with other agencies. GDPR has created a requirement to sign a few more T&Cs and data protection agreements, and more scrutiny over making sure all the right practices are in place – which, from a consumer perspective, is overdue and very welcome.

Since 2007 we’ve seen a progression from purely attitudinal and needs-based segmentations to designing hybrid segmentation programmes which incorporate multiple sources, like customer transactional databases, marketing response, website interactions or third-party data. This has taken segmentation activation to the next level – being able to tag customers with segment codes in order to further enhance tailored communication and improve customer engagement.
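One common way to do that kind of tagging is to train a model on survey respondents whose segment is known, using only variables that also exist on the customer database, and then score every database record. The sketch below shows the idea in Python; the column names and simulated data are illustrative assumptions, not Bonamy Finch’s actual pipeline.

```python
# Hypothetical sketch of "segment tagging": predict survey-based segments
# from variables held on the customer database, then score the full database.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
features = ["spend_last_12m", "visits_per_month", "web_sessions", "email_opens"]

def simulate(n):
    """Stand-in for behavioural variables held on the customer database."""
    return pd.DataFrame(rng.gamma(shape=2.0, scale=20.0, size=(n, len(features))),
                        columns=features)

# Survey respondents matched back to their database records, carrying the
# attitudinal/needs-based segment assigned from the survey.
matched = simulate(1500)
matched["segment"] = rng.integers(1, 5, size=len(matched))   # segments 1-4

# The full customer database, with no survey data attached.
database = simulate(50_000)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Check how well database variables alone recover the survey segments before
# deploying the tags (with purely random data, as here, this will be near chance).
scores = cross_val_score(model, matched[features], matched["segment"], cv=5)
print("Cross-validated tagging accuracy:", scores.mean().round(3))

# Tag every customer with a predicted segment code and a confidence score.
model.fit(matched[features], matched["segment"])
database["segment_code"] = model.predict(database[features])
database["segment_confidence"] = model.predict_proba(database[features]).max(axis=1)
print(database["segment_code"].value_counts())
```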

Once you’ve seen the power of analysing multiple data sources together, it’s very difficult to go back to looking at single datasets in isolation.

How much of an issue do you think GDPR is going to be for data science?

GDPR is quite rightly about good data practice – permission, access, storage and the ability to use the data. I don’t actually think many consumers are going to be phoning up and asking for their data to be removed, but you never know what will happen. Agencies and companies need to ensure all of the practices of their third-party data providers are in place and up to date. It may make some analytics more difficult as more clients ensure they are ready for GDPR.

I think the amount of data that companies are allowed to use and share may take a bit of a hit at the beginning, because people are nervous about opt-in. However, clients will have a more active set of consumers to engage with. There may be a year or two of settling in while most companies are still finding their feet, but as far as we’re concerned it’s important to continue to be on top of privacy policies, making sure that our clients are sharing their data with us in GDPR-compliant ways and that we’ve got everything in place.

Do you worry that some of the data scandals that have been reported lately might damage people’s perceptions of legitimate analytics as well?

It’s obviously not ideal. The media is going to report these breaches and I think this will raise concerns amongst consumers. It’s important to make consumers aware of when, how and why their data is being collected, and the benefits of this data exchange to them. Many consumers would much rather be sent communications about the right products and the right offers for them – messages that are more relevant than the random promotions many of us currently receive.

I’m hoping GDPR will be good for consumers, putting them in control of what they want to share.

Have you noticed any other changes over the course of your career in the way that analytics operates or client demands?

The software and hardware side has advanced. If I go back to my early career, I was using SPSS and Excel, writing syntax and trying to automate that as much as possible. However, there’s been a trend towards R, Python and SQL, mainly driven by the increase in data size and analytical requirements. R has always been strong for statistical analysis, but Python has really caught up, and the advantage of Python is that it can do much more in terms of interaction with different platforms and software. Given the amount of work we are now doing with client databases, website data, third-party appending, and social and digital data, we have definitely shifted towards recruiting data scientists with skills in open-source solutions.

We build our own software using R and Python to give us the flexibility to deliver more powerful solutions. As an example, it might be that we have two or three really different inputs into a segmentation, and that one of them naturally dominates. The software we’ve built lets us upweight the other inputs to make sure that there’s a good contribution from them all.
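One simple way to achieve that balancing – shown below as an illustrative Python sketch, not the actual software described above – is to standardise each block of input variables, rescale each block so it starts on an equal footing, and then apply chosen block weights before clustering. The block names, weights and data are assumptions for the example.

```python
# Illustrative sketch: stop one input block dominating a segmentation by
# standardising each block, equalising for block size, then applying weights.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Three inputs of very different width: a wide attitudinal battery would
# otherwise swamp the smaller behavioural and demographic blocks.
blocks = {
    "attitudes":    rng.normal(size=(1000, 40)),
    "behaviour":    rng.normal(size=(1000, 6)),
    "demographics": rng.normal(size=(1000, 4)),
}
weights = {"attitudes": 1.0, "behaviour": 1.5, "demographics": 1.5}  # upweight the smaller blocks

weighted = []
for name, X in blocks.items():
    Z = StandardScaler().fit_transform(X)
    # Divide by sqrt(number of columns) so each block contributes comparably
    # to the distance metric, then apply the chosen block weight.
    weighted.append(Z * weights[name] / np.sqrt(Z.shape[1]))

segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(np.hstack(weighted))
print("Segment sizes:", np.bincount(segments))
```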

What kind of advice would you give for someone who was thinking of entering analytics in terms of skills development and that kind of thing?

It’s now important to be comfortable with at least the basics of open-source software, and not be scared of programming. I think that we are at a crossroads at the moment. A lot of survey analytics is still done in SPSS. Packages like R and Python will no doubt become more user-friendly over time, and the key skill will move from coding and programming to the flexibility of using multiple analytical tools.

The best analysis comes about not by always using the same techniques, but by understanding when different techniques and different statistical options are available that can get to a better model or a better outcome for the client. Flexibility, commercial awareness and the design of the study or the analytical piece are becoming more important.

If it’s advice for graduates, I would say that statistical analysis now requires a bit of programming, so an understanding of computer science and IT is a benefit nowadays, rather than just knowing stats.

Do you see R and Python as a threat to SPSS? Or are they something that would work as an enhancement, or in partnership with it?

If SPSS doesn’t continue to incorporate R and Python, then possibly. However, we’re a long way from SPSS being completely redundant. The last two people we recruited had strong R and Python skills and we felt that was more important. We’ve trained them on SPSS and they’ve been able to pick that up easily.
