
Digital Disruption Series: Corey Manders

Corey Manders heads R&D at OneConnect Financial Technology in Singapore. Prior to joining OneConnect, he was a Research Scientist and Programme Manager with the Agency for Science, Technology and Research (A*STAR) - a catalyst, enabler and convenor of significant research initiatives among the research community in Singapore and beyond. Dr. Manders obtained his PhD in Computer Engineering/Image Processing from the University of Toronto. OneConnect Financial Technology Singapore officially opened in November 2018 and has about 120 staff members.

Tell me about your R&D department and your hiring methods to build it so far.

I lead an R&D team of about 12 engineers, likely to reach 20 this year, and we have groups of experts currently working on products within Natural Language Processing (NLP) and chatbots; Optical Character Recognition (OCR); intelligent social media; and speech and text. One of the first tasks I took on was hiring and building the team, as finding the right people takes time.

While I do hire experienced engineers, I am also open to hiring fresh PhD and Masters graduates who have the potential to grow. I like to hire Singaporeans because I think the R&D lab in Singapore can be used to grow local talent. I also look for people who show passion or an experimental attitude – for example, when I want to hire an engineer in Machine Learning, I don’t want to endlessly hear about Big Data extraction experience, I want to see a bit of a ‘hacker’ mentality. Mindsets over skillsets, in a way. The people I have are inspiring and engaged, and we give them the freedom to express themselves. One of the team members has a background in Data Analytics but is also interested in OCR and image processing. We gave him the freedom to work on those other interest areas when his schedule allows for it, and he has produced some amazing results.

How does it work in terms of timelines on projects from your R&D department to delivery to the Product team?

Each product has its own roadmap to deployment by the Product team, and the timelines vary. The timeline can be very short for what we do, as it depends on how urgently it is needed. This is especially so if it’s a mature product already underway in China that needs transitioning for another market. We are located in the same office as the Product team and work closely with them, but generally we develop the core tech or the engine of a product in prototype form and then turn over the code via our server to the Product team, who then develop it further and undertake testing. Some products are long-term visions and are more Research than Development; some are short term, where the technology is being adopted now, and as such are more Development than Research.

Which programming languages do you use most in R&D?

Most of what we do is in Python. Everyone in R&D is well versed in Python and it works well for a lot of things. I still read and write code, and on occasion go back to Swift for speed, as I did recently for an OCR prototype I was working on. I also have experience in C and Objective-C for apps. The Product Development team are really good at writing in Swift, Objective-C, JavaScript, React and React Native, but in R&D, Python is really strong. There’s no real requirement for us to know Hadoop or any others.

How have Machine Learning and NLP changed over the last couple of years?

What was not even possible 10 or 12 years ago in machine learning is now commonplace. When I first started at A*STAR, if someone had asked for a program that could recognise a cat in an image, it would have been impossible because there were too many obstacles to achieve that. Now I can train a neural network to do not just that, but to recognise every cat, however hard it is to see. In the last five years, the breakthroughs in deep learning and neural nets have been amazing.
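The "train a network from labelled examples" idea mentioned above can be sketched in a few lines. This is a toy illustration only, with invented synthetic data: a single artificial neuron learning to separate two clusters of 2-D points via gradient descent. Real image recognisers are deep convolutional networks with millions of parameters, but the training loop follows the same pattern.

```python
import math
import random

random.seed(0)

# Synthetic data: class 1 points cluster near (2, 2), class 0 near (-2, -2).
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)] \
     + [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(50)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w1, w2, b = 0.0, 0.0, 0.0   # the neuron's weights and bias
lr = 0.1                    # learning rate

for _ in range(100):        # gradient-descent epochs
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y         # gradient of log-loss w.r.t. the logit
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

correct = sum(
    (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1)
    for (x1, x2), y in data
)
print(f"training accuracy: {correct / len(data):.2f}")
```

Because the two clusters are well separated, the neuron converges to a clean decision boundary; the hard part of real vision problems is that pixels of a cat photo are nowhere near this separable, which is what deep networks solve.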

NLP now is incredible compared to 10 to 15 years ago and it is going to grow tremendously. There are amazing things you can do now with NLP. For example, a language model called GPT-2 by OpenAI has been trained to predict the next word in any text, producing coherent passages of writing, completely unsupervised. It wrote an entire piece about the discovery of unicorns in the Andes which was really well done. The language model is so sophisticated that they are only releasing a smaller version due to concerns about its potentially malicious usage. The impact of something like this on chatbots in customer service would be almost like talking to a human. Then, taking it to the next level, in areas like Southeast Asia where people use Singlish or multiple languages in one sentence, NLP will be able to process that.
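The "predict the next word" objective GPT-2 is trained on can be illustrated with a deliberately tiny stand-in. The sketch below is not GPT-2 (which is a large transformer trained on billions of tokens); it uses simple bigram counts over an invented mini-corpus, but the task - pick the likeliest next word given what came before - is the same.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration.
corpus = (
    "the model predicts the next word . "
    "the model writes coherent text . "
    "the next word is chosen by the model ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

def generate(start, length):
    """Greedily extend `start` by repeatedly predicting the next word."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(predict_next("next"))   # → word
print(generate("the", 4))
```

A bigram model like this degenerates into repetitive loops almost immediately; the leap with GPT-2 is conditioning on long context, which is what makes its passages stay coherent.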

What is your vision and what are some of the things you hope to achieve or implement this year?

Everyone that joins the R&D lab and the Product team knows that the end goal is to create technologies to bring to the market, so in R&D we have KPIs and goals to hit this year in terms of products. It’s fine having a long-term vision, but we also have to address the short-term timelines in place, while pushing things in the right direction to achieve that vision.

We are constantly exploring the real-world problems that people are concerned with, and one of the areas where we can make a difference is Fintech. Technology makes Fintech easier. With technology, a lot of Fintech systems can be automated and are easier to use, like call centres for example. Open banking APIs have also enabled apps like Cleo. While the app can’t complete banking transactions, it can analyse your spending or implement savings goals like a personal automated financial advisor. Chatbots have been widely adopted, but these other Fintech apps are the way forward.

It is my goal by the end of the year to have significant projects we can demo in our user experience room. We have to invent the future ourselves!

About the author:
Felix Fang
