Principal Software Engineer


Job Description

What is Sylvera anyway? 👩👨🌳

Sylvera provides carbon data for genuine climate impact. Our mission is to incentivize investment in real climate action.


Purchasing credits through the carbon markets is one of the most established and scalable ways to channel finance from the private sector to effective climate solutions and work toward societal net zero. Unfortunately, the voluntary carbon markets have been plagued by mistrust and a lack of effectiveness since they emerged – until Sylvera.


To help organizations ensure they're making the most effective investments, Sylvera builds software that independently and accurately automates the evaluation of carbon projects that capture, remove, or avoid emissions. With Sylvera's data and tools, businesses and governments can confidently invest in, benchmark, deliver, and report real climate impact.


Our team is made up of leading minds in climate change, ranging from scientists to policy, finance and carbon market experts. We work in partnership with scientific organisations, universities, governments and think tanks to develop and test rigorous and holistic ratings methodologies, leveraging the latest technology. Founded in 2020, Sylvera has 150+ employees across the world, with offices in London, New York, Belgrade and Singapore. We’ve raised over $96 million to date from leading VCs like Balderton Capital, Index Ventures and Insight Partners.


What will I be doing? 👩‍💻👨‍💻


We’re looking for a mission-driven Principal Software Engineer to join our tech function and bring a vision to our data and backend architecture. Specific responsibilities will include:


- Architectural Oversight: Oversee the architectural decisions around our new data mesh, ensuring modularity, scalability, and robustness. Prioritise domain-driven design, aligning software and data architecture with evolving business domains.

- Hands-on Development: Engage in deep-dive code reviews, contribute to critical code paths, and demonstrate best practices in implementing data product APIs, with a focus on technologies like GraphQL.

- Decentralised Data Governance: Champion a decentralised approach to data ownership and stewardship. Foster an environment where teams autonomously maintain, monitor, and ensure the quality of their data products while adhering to company-wide standards.

- API Design, Evolution & Integration: Take the lead in designing, refining, and evolving APIs, especially GraphQL schemas, ensuring data is efficiently exposed, accessed, and integrated across multiple domains.

- Performance, Reliability & Scalability: Ensure that data products offer high availability, low latency, and scalability. Regularly conduct performance audits, stress tests, and scalability assessments, driving necessary optimizations.

- Data Integration & Warehousing in Mesh Environment: Champion modern ETL tools and processes within the data mesh ecosystem, ensuring each data product is well-integrated and the broader mesh functions seamlessly. Guide decisions on data storage, warehousing practices, and cloud-native data solutions.

- Cross-Functional Collaboration: Foster tight collaboration between data product owners, platform teams, and domain teams. Facilitate knowledge sharing, best practices, and cross-pollination of ideas to drive the evolution of the data mesh.

- Mentorship & Skill Development: Regularly upskill the engineering team on data mesh principles, modern API technologies, and best practices, ensuring consistent growth and development.

- Continuous Learning & Ecosystem Influence: Stay abreast of emerging best practices, tools, and platforms. Engage with the broader data community, contributing insights, and bringing back valuable knowledge to the organisation.


We’re looking for someone with: 🧠💚


- An interest in making a difference through climate action.

- A passion for protecting the Earth's climate and ecosystems.

- The mindset of a self-starter who thrives in constantly evolving environments, ideally with early-stage experience.

- Extensive Experience: A background in software engineering with a focus on data-intensive environments, including at least two years architecting or leading data products. Proficiency in Python development.

- Technical Expertise in Data Products: Hands-on experience with designing, deploying, and managing data products in a mesh environment. Experience working with data warehouses such as Snowflake as well as data lakes.

- Deep Understanding of APIs: Proficiency in designing and implementing APIs, especially with GraphQL, RESTful services, and microservice architectures.

- Mastery of Data Modeling: Expertise in domain-driven design and the ability to craft robust data models that cater to business needs in a decentralised data product landscape.

- Strong Architectural Acumen: A demonstrated history of making high-level design choices, dictating technical standards, and driving architectural governance.

- Cloud Data Solutions: Deep familiarity with cloud-native platforms such as AWS and their data-oriented offerings.

- Decentralised Data Governance: Experience in implementing or working within a decentralised data ownership model, emphasising domain-based ownership and stewardship.

- Performance Tuning: Proven ability to diagnose performance bottlenecks and strategize optimizations in large-scale data systems.

- Soft Skills: Strong leadership and interpersonal skills, with the ability to influence across functions, mentor junior members, and drive collaboration.

- Problem-solving Prowess: Demonstrated ability to dissect complex problems, provide clear strategic direction, and devise effective technical solutions.

- Business Acumen: A keen understanding of how data-driven decisions impact business outcomes, and the ability to align technical strategies with business objectives.


At the core of our team's ethos is the drive to create and execute. While being able to drive tasks to completion is paramount, it's equally critical for our Principal Engineer to own their deliverables with a deep sense of accountability. We understand that in the fast-paced world of startups, perfection can be the enemy of progress. Hence, we're looking for someone who can adeptly make the right trade-offs between performance, quality, and time — ensuring that our solutions are robust and efficient, but also timely in meeting our business needs and objectives.


Your ability to prioritise, pivot when necessary, and deliver with both speed and excellence will be instrumental to your success and the company's growth.

Apply to Job

👉 Please mention that you found the job on ClimateTechList, this helps us get more climate tech companies listed here, thanks!

Get a referral to Sylvera

If possible, try to get a warm intro/referral to Sylvera before applying! Do a LinkedIn search to see who you may know at the company. See this LinkedIn post from Steven for more details on this tactic.

All job openings from Sylvera

Join ClimateTechList Talent Collective

Want to be matched with companies directly? Apply to the talent collective.

Here's how it works:

  1. You submit an application

  2. We'll share your profile with climate tech companies potentially interested in chatting with you

  3. We'll reach out if there's a company interested in talking to you.