Fluence is the leading global energy storage technology and services company, created and backed by Siemens and AES, two industry powerhouses and pioneers in energy storage. Fluence unites the scale, experience, breadth, and financial backing of these two icons.
Our mission is to create a more sustainable future by transforming the way we power our world. Energy storage is critical to this transformation, yet today the market is fragmented and customers face the challenge of finding a trusted technology partner amidst conflicting technical claims, inexperienced vendors and installers, and new market entrants with limited power sector knowledge.
Fluence brings the proven technology solutions and services that overcome the commercial and regulatory barriers that stand in the way of modernizing our energy networks. We are the partner that can deliver at a global scale with the most experienced and knowledgeable team in the world.
Role & Responsibilities
1. Data Infrastructure & Transformation:
· Design, maintain, and optimize data infrastructure for data collection, management, transformation, and access, focusing on scalability, reliability, and cost-effectiveness.
· Remain hands-on with data engineering tasks, including data pipeline development, ELT processes, and data integration, and serve as the go-to expert for complex technical challenges.
· Implement and manage cloud infrastructure and automated workflows using AWS services (e.g., Step Functions, Batch, Glue, Athena, Lambda, EC2, EventBridge, ECS, Redshift), while optimizing existing orchestration solutions.
· Monitor PostgreSQL performance and conduct troubleshooting to identify and resolve issues with database queries, performance bottlenecks, and availability.
· Use Python and AWS cloud services to automate data retrieval and processing tasks.
2. Process Improvement & Efficiency:
· Identify opportunities for process improvement in data workflows, with a focus on automation and scalability.
· Build and manage data warehouses, data lakes, and other data storage solutions to support large-scale data operations and analytics.
· Document technical architectures, best practices, and operational procedures for orchestration workflows and automated infrastructure.
· Develop problem-solving skills by participating in root cause analysis, gap analysis, and performance evaluations.
· Exhibit strong time management skills and attention to detail, with the ability to manage multiple tasks and priorities in a dynamic environment.
· Show eagerness to learn and apply new data analysis techniques, tools, and methodologies.
· Thrive in a fast-paced, evolving work environment while taking on new challenges.
3. Collaboration & Support:
· Work closely with other team members to support ongoing data extraction and data pipeline needs.
· Contribute to internal projects by documenting data workflows and helping with ad hoc data pull requests.