IBM Big Data Engineer in Boston, Massachusetts
We live in a moment of remarkable change and opportunity. The convergence of data and technology is transforming industries, society and even the workplace—by creating professions that didn’t exist before the emergence of data, cloud, social and mobile. IBM Global Business Services is a leader in this worldwide transformation and just the place to define and develop your consulting career. Whether it’s business consulting, sales, project management or a technical path, you’ll have the opportunity to make an impact on the world by working to solve some of society’s most complex problems with your original thinking and ideas. As an IBMer, you’ll innovate in pursuit of higher value in everything you do, all while being guided by IBM’s purpose—to be essential. Essential in your leadership and dedication to building valuable client relationships with groundbreaking work. Essential in uncovering what’s possible and helping global clients succeed. Join us as we make the most of these exciting times and discover what you can make of this moment. What will you make with IBM?
Do you want to be part of a cutting-edge solutions team that brings together IoT and data? Would you like to join a team focused on increasing client satisfaction by delivering results alongside high-performing team members? As a Big Data Engineer, you will be responsible for designing and developing big data utilities that automate data acquisition, ingestion, storage, access, and transformation for data volumes that scale up to petabytes. You will be part of a high-octane, multi-disciplinary group spanning the Data Acquisition, Ingestion, Curation, and Storage teams. You will be a hands-on developer partnering with team leads, technical leads, solution architects, and data architects.
Design and build data services to auto create Hive structures, HDFS directory structures and partitions based on source definitions using a configuration and metadata driven framework
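To illustrate the kind of configuration- and metadata-driven framework described above, here is a minimal Python sketch that derives a Hive DDL statement and HDFS location from a source definition. The metadata layout, database, table, and column names are all hypothetical examples, not details from this role.

```python
# Sketch: metadata-driven generation of Hive DDL and an HDFS path.
# The metadata schema below (db/table/zone/columns/partitions) is a
# hypothetical example of a source definition, not a real framework.

def build_hive_ddl(meta):
    """Build a CREATE EXTERNAL TABLE statement and HDFS location from metadata."""
    cols = ",\n  ".join(f"{c['name']} {c['type']}" for c in meta["columns"])
    parts = ", ".join(f"{p['name']} {p['type']}" for p in meta["partitions"])
    # Directory convention: /data/<zone>/<table>, mirrored by the table LOCATION.
    location = f"/data/{meta['zone']}/{meta['table']}"
    ddl = (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {meta['db']}.{meta['table']} (\n"
        f"  {cols}\n"
        f")\n"
        f"PARTITIONED BY ({parts})\n"
        f"STORED AS PARQUET\n"
        f"LOCATION '{location}'"
    )
    return ddl, location

# Example source definition, as it might come from a config store.
meta = {
    "db": "raw", "table": "orders", "zone": "landing",
    "columns": [{"name": "order_id", "type": "BIGINT"},
                {"name": "amount", "type": "DECIMAL(10,2)"}],
    "partitions": [{"name": "load_date", "type": "STRING"}],
}
ddl, path = build_hive_ddl(meta)
print(ddl)
```

In a real pipeline the generated DDL would be executed through Hive (e.g. beeline), and the HDFS directories and partitions would be created from the same metadata so the two never drift apart.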
Design and build custom code for audit, balance, and controls, data reconciliation, and entity resolution; create custom UDFs as needed for complex data transformations
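Custom Hive UDFs are typically written in Java, but one lightweight alternative is a streaming script invoked through Hive's TRANSFORM clause. The sketch below shows that style in Python; the column layout and the phone-cleaning rule are hypothetical examples.

```python
# Sketch: a row-level transformation in the style of a Hive streaming
# script (used via SELECT TRANSFORM(cust_id, phone) USING 'python clean.py').
# The two-column layout and the cleaning rule are illustrative assumptions.
import re


def clean_phone(raw):
    """Strip formatting and keep the last 10 digits of a US phone number."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:] if len(digits) >= 10 else ""


def transform(lines):
    """Process tab-delimited rows the way Hive streams them to a script."""
    out = []
    for line in lines:
        cust_id, phone = line.rstrip("\n").split("\t")
        out.append(f"{cust_id}\t{clean_phone(phone)}\n")
    return out


# Example: one streamed row in, one cleaned row out.
rows = transform(["42\t(617) 555-0199\n"])
print(rows[0], end="")
```

In production the script would read the streamed rows from sys.stdin and write results to sys.stdout; keeping the logic in plain functions, as above, makes it straightforward to unit test (per the unit-testing responsibility listed below).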
Create technical specifications, Unit test plan/cases and document unit test results
Perform Integration testing for the end-to-end data pipelines
Required Technical and Professional Expertise
2+ years of deep working knowledge of open source tools such as Hive and Sqoop
2+ years of coding experience with SQL & Linux Bash/Shell Scripting
2+ years of coding experience with Java, MapReduce, Pig, and Python
4+ years of hands-on experience with, and deep understanding of, Linux scripting
Familiarity with Big Data concepts, Data Lake development, Business Intelligence and data warehousing development processes and techniques
Experience working with Cloudera Manager, Navigator, Impala and security tools (Sentry, Gazzang)
Preferred Tech and Prof Experience
- Experience or knowledge with the following: Spark, Kafka, Flume, MySQL, NoSQL, Cassandra, Neo4j, MongoDB, Hortonworks, BigInsights, Cloudera
IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.