Pivot Bio is the leading nitrogen innovator providing farmers and the world with a better nitrogen for improved productivity and sustainability
About Pivot Bio:
Fueled by an innovative drive and a deep understanding of the soil microbiome, Pivot Bio is pioneering game-changing advances in agriculture. Our products harness the power of naturally occurring microbes to provide nutrients to crops and give farmers new, sustainable ways to reduce fertilizer use as they work to help feed the world’s growing population.
Read/Hear more about Pivot Bio on Forbes or PBS News Hour.
Position: Data Warehouse Engineer
Pivot Bio is seeking an experienced Data Warehouse Engineer with expertise in Python and Databricks to join our fast-growing Data Platform team. The Data Warehouse Engineer will play a key role in making data actionable at all levels of the organization to inform decisions, actions, and strategy. You will work cross-functionally across business units to ensure our systems are functioning as expected, implement functional improvements, and empower business users to use BI tools to gain valuable business insights. You will be accountable for data quality, modeling, enrichment, and availability. You will also manage and monitor Master Data Management (MDM) integrations between business systems and be part of larger integration activities.
People who excel on our team hold themselves to a high standard of quality while working independently. They give and receive honest feedback. They are empathetic, proactive, and passionate. They think exponentially and embrace a fast-paced and high-growth environment. If that sounds like you, come join us and be a part of the solution to climate change.
Responsibilities:
- Design and build Databricks data warehouses to support business needs and data analysis requirements using Python and Fivetran
- Develop and maintain ETL processes to load data from various sources into the data warehouse using Fivetran, Python, and Databricks
- Improve Enterprise Information Management performance by leveraging MDM, metadata management, and information modeling
- Design, document, and maintain logical data models including data definitions and data standards
- Develop and maintain data pipelines using Python, Databricks, and Fivetran to ensure data accuracy, completeness, and consistency
- Optimize data warehouse performance and scalability to meet business requirements
- Monitor data quality and take necessary actions to ensure data integrity
- Collaborate with data scientists, engineers, and stakeholders to understand business requirements and translate them into technical solutions using Python, Databricks, and Fivetran
- Communicate technical solutions to non-technical stakeholders in a clear and concise manner
- Keep up to date with industry trends and best practices in data warehousing and business intelligence
Physical Demands:
- Job will involve mostly office and computer-based work
- Repeating motions that may include the wrists, hands, or fingers
- Sedentary work that primarily involves sitting/standing
- Communicating with others to exchange information
Qualifications and Experience:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent industry experience
- 5+ years of industry or educational experience, including 2+ years in data warehousing and business intelligence, with an emphasis on Python, Databricks, and data movement technologies such as Fivetran
- Proficiency in Python, SQL, ETL tools, and data modeling
- Strong knowledge of data warehousing concepts, techniques, and best practices
- Experience with cloud-based data warehousing solutions
- Experience with data pipeline and workflow tools
- Experience with Databricks for data processing, data engineering, and data analytics
- Experience in automated data integration and replication using tools like Fivetran
- Excellent communication skills and ability to work collaboratively in a team environment
- Strong analytical and problem-solving skills
Additional Qualifications:
- Familiarity with data visualization tools such as Tableau, Spotfire, or Power BI
- Experience with distributed computing frameworks such as Hadoop or Spark
- Background in science, research, or agriculture
- Familiarity with Amazon Web Services (AWS) or other cloud computing technologies
- Experience provisioning infrastructure using Terraform or other IaC tools
*Must be authorized to work in the United States
What we offer:
- Competitive package in a disruptive startup
- Stock options
- Health/Dental/Vision insurance with employer-paid premiums
- Life, Short-Term, and Long-Term Disability policies
- Employee Assistance Program with free referrals and discounts
- 401(k) plan with 3% match
- Commuter benefits
- Annual Training & Development support
- Flexible vacation policy with a generous holiday schedule
- Exciting opportunity to work with a talented and fun team
Hiring Compensation Range: $115,000 - $144,000
All remote positions and those not based in our Berkeley, Hayward, or Boston locations are paid based on national benchmark data. Following employment, growth beyond the hiring range is possible based on performance.