Senior Data Engineer-AI
We are Relativity, a market-leading global tech company that equips legal and compliance professionals with a powerful platform to organize data, discover the truth, and act on it. The US Department of Justice, 199 of the Am Law 200, and more than 329,000 enabled users trust Relativity during litigation, internal investigations, and compliance projects.
Our SaaS product, RelativityOne, has become the fastest-growing product in the company's history, and we have consistently been named a great workplace. As we grow, we continue to seek individuals who will bring their whole, authentic selves to our team.
We believe that great talent is not bound by geography and that what you do matters more than where you do it. Relativity has adopted a hybrid work strategy, giving employees the choice and flexibility to work from home, from a physical Relativity office location (once safe to do so), or a combination of the two, within certain logistical boundaries. Submit your application to learn more from our recruiters or contact us for more details.
About AI at Relativity
In the past two years, billions of documents have already benefited from the insights of Relativity AI, and we are just getting started on our journey to use AI to improve each user experience, product, matter, and investigation at Relativity. We are focused on helping our users discover the truth more quickly and act on data with confidence.
· We are focused on algorithm excellence to provide the most robust and trusted experience possible.
· We are creating a world-class toolset to solve complex challenges quickly and iteratively.
· AI will be leveraged everywhere, at every stage of the discovery process, to better manage cases and optimize product operations.
As a team, we believe in exploration, experimentation, and bringing your curiosity to work every day. We know that you can’t innovate without experimentation — and a little failure happens on the path to invention. We use the latest and greatest to ensure we are the best. We strive to experiment, ship, and learn every day.
About Data Engineering for AI
Great insights can’t happen without great data, and the best insights come from massive data. Our data infrastructure and engineering ensure that the breadth of Relativity data is available for insights, confidential data is kept confidential, and data is protected at all times. To continue to unlock more insights, we are investing heavily in data pipeline and data lake technology.
If you are experienced in big data technologies such as Hadoop/HDFS, Kafka, data pipelines, blob storage, distributed file systems, big data storage formats, Spark, Python, JVM/Scala, and Snowflake, and you are looking for an at-scale challenge with a ton of new innovation and experimentation ahead, you will find yourself at home on the AI data engineering team at Relativity. The team is small but growing fast; you'll be on the front lines of implementing our data pipelines. We seek collaborative builders who want to move fast and love a challenge.
About the Senior Data Engineering Role for AI
You’ll work both within our team and across the company to leverage our data at scale. You’ll be building out company-wide big data storage, pipelines, streaming, micro-batch, and batch processing solutions. You’ll partner directly with our data scientists to create best-in-class tooling for managing our fleet of models. You’ll inspire and engage other software engineers to learn about and build big data solutions. You’ll work with our data scientists and other data engineers to dream bigger about what’s possible. Innovations that you help create and deliver will run on Relativity’s global cloud footprint, powering billions of insights.
- Participate in key design decisions related to our big data and data science infrastructure and toolset.
- Advise the business and engineering teams on best practices for data collection, data management, data quality, and the use of data at scale.
- Collaborate with our data scientists, product managers, and engineering teams to understand data requirements and to build workable data solutions.
- Identify and architect multiple candidate data solutions for a given set of business requirements, and understand the trade-offs between those solutions.
- Implement scalable data pipelines using streaming or batch processing, using best practices for ETL/ELT and big data tools.
- Ship working solutions in an iterative fashion using a Continuous Deployment strategy.
- Learn about and keep up with the latest trends and technologies in Data Science, Machine Learning, Artificial Intelligence, statistics, and applied mathematics.
- Educate and mentor other Data Engineers on our tech stack and data best practices.
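To make the pipeline and data-quality responsibilities above concrete, here is a deliberately tiny, hypothetical sketch of one micro-batch ETL step with a simple quality gate. All names are illustrative and none of this reflects Relativity's actual stack; a production version would use tools like Spark or Flink against real storage:

```python
from dataclasses import dataclass


@dataclass
class Document:
    """Target schema for a loaded record (illustrative only)."""
    doc_id: str
    text: str


def is_valid(raw: dict) -> bool:
    """Data-quality gate: required fields must be present and non-empty."""
    return bool(raw.get("doc_id")) and bool(raw.get("text"))


def transform(raw: dict) -> Document:
    """Normalize a raw record into the target schema."""
    return Document(doc_id=raw["doc_id"].strip(), text=raw["text"].strip())


def run_batch(raw_records: list[dict]) -> tuple[list[Document], list[dict]]:
    """Process one micro-batch: valid records are transformed and loaded;
    invalid records are quarantined for later inspection."""
    loaded, quarantined = [], []
    for raw in raw_records:
        if is_valid(raw):
            loaded.append(transform(raw))
        else:
            quarantined.append(raw)
    return loaded, quarantined
```

The quarantine list is the key design choice here: rather than failing the whole batch on one bad record, the step isolates rejects so the pipeline keeps flowing and data-quality issues stay visible.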
Qualifications
- Experience designing APIs, service-oriented architectures, cloud-based distributed systems, and big data systems.
- Track record of delivering complex technical solutions.
- Excellent communication skills.
- Experience building batch and stream processing with technologies such as Apache Spark, Apache Flink, Kafka, data lakes, data pipelines, blob storage, distributed file systems, big data storage formats, SQL and NoSQL stores, Python, JVM/Scala, and cloud-based data warehouses.
- Experience developing ETL/ELT and data pipelines using a variety of tools.
- Experience creating processes and systems to manage data quality.
- Fluent in multiple programming languages, preferably including Python and a JVM language.
- Experience with AWS, Google Cloud, or Azure data infrastructure and tooling.
- Experience collaborating with data science teams, with conceptual knowledge of data science project lifecycles and techniques.
- Experience designing, building, and managing either data lakes, data marts, or data warehouses.
Preferred Qualifications
- Experience training and deploying machine learning models.
- Experience with the Azure cloud environment and Azure’s data management and data science toolset.
- Fluent in C# and .NET technologies.