Since 1918, TIAA’s mission to serve, our ability to perform, and the values we embrace have made us a different kind of financial services organization. We’re dedicated to serving the financial needs of those in the academic, medical, cultural, governmental, and research fields, and committed to helping make lifetime financial well-being possible for them.
By building a culture that allows all employees to contribute their unique talents and skills, we’re able to provide our customers with fresh ideas and distinct perspectives to help them achieve their goals. We believe a diverse and inclusive workforce is one of our greatest strengths and a key measure of our success*.
For more information about TIAA, visit our website.
TIAA’s Production Services team manages our core infrastructure assets. Our team is the first line of support on any production-related issues, including our network and telecommunications, computing (middleware), and end-user support, as well as the firm’s 24×7 data center operations. The Production Services team collaborates closely with our business-aligned partners in technology and with key stakeholders across the enterprise.
As a Lead Engineer you will have the opportunity to engineer and administer TIAA’s big data environment. You will administer our Hadoop and NoSQL ecosystem components such as HDFS, Hive, MapReduce, YARN, Impala, Spark, Sqoop, HBase, Sentry, Hue, and Oozie. You will design and implement automated processes, research database technologies, and communicate effectively with database administrators and application stakeholders to ensure your internal clients’ needs are met.
KEY RESPONSIBILITIES AND DUTIES:
- Responsible for the implementation and ongoing administration of Hadoop infrastructure, including the installation, configuration, and upgrading of the Cloudera distribution of Hadoop
- Perform file system management, cluster monitoring, and performance tuning of the Hadoop ecosystem
- Resolve issues involving MapReduce, YARN, and Sqoop job failures; analyze and resolve multi-tenancy job execution issues
- Design and manage backup and disaster recovery solution for Hadoop clusters
- Work on Unix operating systems to efficiently handle system administration tasks related to Hadoop clusters
- Manage the Apache Kafka and Apache NiFi environments
- Participate in and manage data lake data movements involving Hadoop and NoSQL databases such as HBase, Cassandra, and MongoDB
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users; configure Hadoop security, including Kerberos setup and RBAC authorization using Apache Sentry
- Create and document best practices for the Hadoop and big data environment
- Participate in new data product or new technology evaluations; manage the certification process and evaluate and implement new initiatives in technology and process improvements
- Interact with Security Engineering to design solutions, tools, testing and validation for controls
- Evaluate database administration and operational practices, and evolve automation procedures (using scripting languages and configuration management tools such as shell, Python, Ruby, Chef, Puppet, and CFEngine)
- Advance the cloud architecture for data stores; work with the TIAA Cloud Engineering team on automation; help operationalize cloud usage for databases and the Hadoop platform
- Engage vendors for feasibility of new tools, concepts and features, understand their pros and cons and prepare the team for rollout
- Analyze vendor suggestions/recommendations for applicability to TIAA’s environment and design implementation details
- Perform short- and long-term system/database planning and analysis, as well as capacity planning
- Integrate/collaborate with application development and support teams on various IT projects
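As one illustration of the user-onboarding responsibility above, a minimal sketch in Python that generates the command plan for a new Hadoop user (the realm, group, and keytab path are hypothetical placeholders, and a kadmin-managed KDC with `/user/<name>` HDFS home directories is assumed; this is not a description of TIAA's actual environment):

```python
# Sketch: generate the command plan for onboarding a new Hadoop user.
# Assumes a kadmin-managed Kerberos KDC and /user/<name> HDFS home dirs;
# the realm, group, and keytab path below are illustrative only.

def onboarding_plan(user, realm="EXAMPLE.COM", group="hadoop-users"):
    """Return the shell commands an admin (or automation) would run."""
    principal = f"{user}@{realm}"
    home = f"/user/{user}"
    return [
        f"useradd -m -G {group} {user}",               # Linux account
        f"kadmin -q 'addprinc -randkey {principal}'",  # Kerberos principal
        f"kadmin -q 'ktadd -k /etc/security/keytabs/{user}.keytab {principal}'",
        f"hdfs dfs -mkdir -p {home}",                  # HDFS home directory
        f"hdfs dfs -chown {user}:{group} {home}",
        # Smoke test: the new user should be able to list their own home dir
        f"sudo -u {user} hdfs dfs -ls {home}",
    ]

if __name__ == "__main__":
    for cmd in onboarding_plan("jdoe"):
        print(cmd)
```

Emitting a reviewable plan rather than executing the commands directly keeps the sketch safe to dry-run before wiring it into real automation.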
QUALIFICATIONS:
- Bachelor’s degree, preferably in Computer Science or Information Systems
- Ten or more years of overall IT/DBMS/data store experience, preferably with a background in Oracle database engineering
- Three or more years of experience in big data, data caching, data federation, and data virtualization management; experience leveraging Hadoop and/or NoSQL preferred
- Two or more years of expertise and in-depth knowledge of SAN, system administration, VMware, backups, restores, data partitioning, database clustering, and performance management
- Experience writing shell scripts and automating tasks; exposure to Chef and/or Puppet preferred
- Experience with the implementation details of Hadoop clusters, Impala, HBase, and other emerging data technologies
- Experience with monitoring technologies for databases
- Experience with orchestration techniques, infrastructure automation, and cloud deployments
- Understanding of Linux, Windows, and Docker/containers
- Familiarity with service-oriented concepts such as IaaS and DBaaS preferred
- Familiarity with cloud architecture (public and private clouds); AWS and Azure preferred
- Working knowledge of VMware and VMware vCloud Automation Center (vCAC) preferred
- Proficiency in Microsoft Office (Word, Excel, PowerPoint) to document, present, and communicate ideas and concepts
- Strong communication skills and the ability to collaborate with other engineers in a fast-paced and ever-changing technical environment
- Application development experience: database programming, scripting, setting up web sites and dashboards
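The scripting and database-monitoring experience listed above might look like the following in practice: a small Python sketch that flags DataNodes over a disk-usage threshold by parsing `hdfs dfsadmin -report` text output. The report layout shown is the common Hadoop 2.x format; the hostnames and figures in the sample are invented, and the format should be verified against the actual cluster's output.

```python
# Sketch: flag DataNodes above a disk-usage threshold by parsing the text
# output of `hdfs dfsadmin -report`. The layout below follows the common
# Hadoop 2.x report format; verify it against your cluster's real output.
import re

def overloaded_nodes(report_text, threshold_pct=80.0):
    """Return (hostname, used_pct) pairs for DataNodes above threshold_pct."""
    flagged = []
    host = None
    for line in report_text.splitlines():
        m = re.match(r"Hostname:\s*(\S+)", line.strip())
        if m:
            host = m.group(1)
        m = re.match(r"DFS Used%:\s*([\d.]+)%", line.strip())
        if m and host:
            pct = float(m.group(1))
            if pct > threshold_pct:
                flagged.append((host, pct))
            host = None  # reset until the next Hostname line
    return flagged

# Invented sample output for illustration
SAMPLE = """\
Live datanodes (2):
Hostname: dn1.example.com
DFS Used%: 91.20%
Hostname: dn2.example.com
DFS Used%: 42.50%
"""

if __name__ == "__main__":
    print(overloaded_nodes(SAMPLE))  # flags only dn1.example.com at 80%
```

In a real deployment the report text would come from running the command (e.g. via `subprocess`) on a node with cluster access, and the flagged list would feed an alerting or ticketing hook.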
Equal Employment Opportunity is not just the law, it’s our commitment. Read more about the Equal Employment Opportunity Law.
If you need assistance applying due to a visual or hearing impairment, please email Careers Help.
We are an Equal Opportunity/Affirmative Action Employer. We will consider all qualified applicants for employment regardless of age, race, color, national origin, sex, religion, veteran status, disability, sexual orientation, gender identity, or any other legally protected status.
*©2016 Teachers Insurance and Annuity Association of America (TIAA), 730 Third Avenue, New York, NY 10017 C23921