Role

The role involves developing Big Data applications using Java and Hadoop or similar Big Data technologies.

  • Design and develop new features and enhancements on our Big Data Analytics platform
  • Develop application specifications and designs which are scalable, extensible, maintainable and testable
  • Build systems, libraries, and frameworks within, around, and on top of Hadoop or similar
  • Utilize frameworks and extensions to Hadoop, such as Cascading
  • Design and implement Map/Reduce jobs to support distributed data processing
  • Process large data sets utilizing Hadoop clusters or similar
  • Implement features and verify them by authoring automated unit and black-box tests
  • Work in small teams where each member has significant ownership and can make a big impact
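To illustrate the Map/Reduce work mentioned above: the pattern maps input records to key/value pairs, then reduces all values sharing a key into a single result. The sketch below is a hypothetical word-count example in plain Java (no Hadoop cluster or dependency assumed); on the actual platform the same logic would run as a distributed Hadoop job.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Map phase: split each line into lowercase words (the "keys").
    // Shuffle/reduce phase: group identical words and sum their counts.
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
            .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
            .filter(word -> !word.isEmpty())
            .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = wordCount(List.of("Big data", "big clusters"));
        System.out.println(counts); // e.g. {big=2, clusters=1, data=1}
    }
}
```

In a real Hadoop job the map and reduce steps would be separate classes executed across the cluster, but the data flow is the same.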


Skills

  • 3+ years of experience developing and maintaining complex web software applications
  • Excellent object-oriented programming ability, an appreciation for loosely coupled architecture, and a clear understanding of best practices
  • Familiarity with Linux, J2EE, HTML, Hibernate, and Spring MVC
  • Familiarity with technologies such as Hadoop, Map/Reduce, Cascading, Hive, Rackspace Cloud Servers, Amazon EC2, EMR, S3, Storm, Kafka, Flume, Nutch, Dremel, BigQuery, R, and JUnit would be a strong plus
  • Working knowledge of internet protocols (e.g. HTTP), relational databases, and multithreading
  • Self-directed, with demonstrated problem solving skills
  • Strong written and oral communications skills


Educational Background

B.Tech/M.Tech in Computer Science from a top-tier educational institution