Detailed Job Description:
· Experienced professional with 10-12 years of experience developing and implementing statistical models in a Big Data ecosystem (e.g., Hadoop, Spark, HBase, Hive/Impala, or similar distributed computing technologies), as well as on public cloud platforms and systems
· Proficiency with Python/R and core libraries for statistical/econometric modeling, such as Scikit-learn and Pandas
· Experience with Hadoop, Snowflake, Spark, HDFS, Python, R, and PySpark
· Proficiency with Dataiku, Databricks, or similar AI/ML platforms
· Proficiency in data analysis using complex, optimized SQL and/or the technologies listed above
· Understanding of data structures, data modeling, and software architecture
· Good written and verbal communication skills

Proficiency/experience with the following is a plus:
· In-depth understanding of Statistics and Mathematics
· Finance, Mortgages, Bank Deposit Products