CRAYS aims to create a superior living environment that enhances the lives of our residents and communities by developing, acquiring, owning, and managing multifamily apartment buildings and the services and technology inside those buildings. Fulfilling our mission will require an exceptional group of people whose collective output is greater than the sum of its individual parts. Our team members are energized by the opportunity to impact our residents’ lives in meaningful ways. They are bold and creatively ambitious, driven by relentlessly high standards, act with a sense of urgency and accountability, and always, above all, operate with integrity, loyalty, and trust.

We’re looking for a Data Engineer to join our team. This role is responsible for building the systems and infrastructure that allow CRAYS to handle data at scale. The Data Engineer is a key member of the tech team, reporting to our BI Product Manager. As a Data Engineer, you will design and develop large-scale data systems (e.g., databases, data warehouses, big data systems), platforms, and infrastructure for analytics and business applications. You’re excited to solve data challenges across both digital and physical products as well as across multiple business verticals.
Your tasks:
- Build and support our modern data stack (AWS, Snowflake, etc.)
- Architect, build, test, document, and launch highly scalable and reliable data pipelines for business intelligence analytics across the business
- Develop source of truth datasets and tools that encourage data-driven decisions and allow our teams to access and prepare data sets and reports easily and reliably
- Partner with stakeholders to translate complex business or technical problems into end-to-end data tools and solutions (e.g., pipelines, models, tables)
- Evaluate alternatives and make decisions on our data infrastructure
Your Skills:
- A minimum of 5 years’ experience in a data engineering role
- High proficiency in the ‘modern data stack’ (e.g., Snowflake, Fivetran, dbt, Sigma), as well as SQL, Python, and AWS
- Experience designing and maintaining tools that support ETL pipelines and downstream business use cases of data
- Ability to collect, interpret, and synthesize inputs from various parts of the business into data model requirements
- Experience configuring databases and data warehouses to have optimal performance and reliability
- Deep understanding of the first and second order effects of reporting — you know the power of presenting the right data to the right people at the right time
- Inherent curiosity and analytical follow-through — you can’t help but ask “why?” and love using data and logic to explore potential solutions
- Overall understanding of data security and privacy best practices
- Highly collaborative and able to communicate effectively, both verbally and in writing
- A team player who can easily adapt in a rapidly changing environment
What we offer: