Who we are:
Wise Systems builds autonomous dispatching and routing software that is used by some of the world's largest fleets to improve both fleet efficiency and customer service. At Wise Systems, we believe that companies don't have to sacrifice efficiency to dramatically improve their customer service, and we've built a platform to transform delivery operations.
Based in Cambridge, MA, Wise Systems was started out of MIT and is growing rapidly, building an incredible, diverse team that shares a deep commitment to our customers’ success. Wise also has a highly engaged network of advisors, mentors, and investors passionate about enabling new standards and capabilities in delivery and logistics through engineering and data science.
As we continue to grow, we're excited to bring together people who are curious, ambitious, and creative. If you are excited about solving real-world problems and building powerful, usable products, we need you.
What we are looking for:
Wise Systems is looking for a Data Engineer to join the team responsible for building and managing our data and data-pipeline architecture. This engineer will be a collaborative team player focused on supporting the existing data-reporting pipelines that deliver critical reports to Wise Systems customers every day.
The ideal candidate will have experience building and maintaining data architectures that support reporting platforms in a production environment. We are looking for someone with a strong problem-solving background and a demonstrated ability to develop effective, novel solutions in both well-defined and more open-ended problem spaces.
What you'll do:
- Maintain and build robust, scalable data-integration (ETL) pipelines using SQL, EMR, Python, and Spark
- Develop, deploy, and manage microservices
- Build and deliver high-quality data architecture to support the business-intelligence needs of business analysts, data scientists, and external customers
- Monitor and maintain the existing client-facing reporting pipeline
- Design and implement a new reporting workflow with self-service support for customers
- Support the day-to-day data needs of internal and external stakeholders
What you'll need:
- 2-4 years of experience as a Data or Software Engineer using Python and Spark
- Knowledge of big-data ecosystems and cloud-based technology (AWS experience preferred)
- Experience with infrastructure-as-code tools such as Terraform
- Proficient understanding of data infrastructure and backend systems
- Demonstrated experience developing ETL pipelines and solutions
- Strong communication skills, with the ability to convey complex topics to audiences of varying technical expertise
Who you are:
- You treat customers with respect, grounded in integrity, collaboration, and partnership with your peers across the company
- You value being part of a team-oriented culture
- You pride yourself on coming up with creative solutions
- You are goal-oriented, a quick learner, a self-starter, and a problem solver