JD Edwards Developer
Job Description – JD Edwards (E1) Developer

We are seeking a skilled Senior JD Edwards (E1) Developer to play a pivotal role in our ERP organization. The ideal candidate will contribute to process redesign and project execution, ensuring high-quality delivery in line with governance standards. The JD Edwards Developer will manage the development lifecycle, focusing on integrations and enhancements for a new global MES solution project, and will take end-to-end responsibility for ERP development delivery.

Main Responsibilities:
- Collaborate with internal leads to develop technical solutions.
- Manage Requests for Change (RFCs) and propose minor enhancements to the E1 platform.
- Participate in continuous process improvements.
- Ensure quality assurance throughout development efforts.
- Drive task progress independently.
- Record daily time spent on RFC tasks.

Key Requirements:
- Strong experience with the E1 development lifecycle.
- Solid understanding of E1 applications.
- Proven expertise in FDA, RDA, NER, BSSV, and table/view design.
- Experience with at least two JD Edwards modules (Finance, Distribution, Manufacturing, Sales).
- Proficiency in C programming and Orchestrator Studio.
- Expertise in scripting languages such as Groovy.
- Ability to challenge and improve technical solutions.
- Fluency in English (written and spoken).
- Ability to work independently and collaboratively with business analysts.
- Proactive attitude towards adapting to changing requirements.

Nice to Have:
- Experience with additional JD Edwards modules.
- Experience in global project implementations.

Start date: As soon as possible
Duration: 1½ to 2½ months
Workload: Full-time
Location: Remote
Other jobs from Emagine

Data Engineer
Introduction & Summary
We are seeking qualified Data Engineers. Ideal candidates will have at least five years of experience in DevOps environments, demonstrating proficiency in agile software development practices and extensive knowledge of Azure Databricks.

Main Responsibilities
- Collaborate with the DevOps team and internal subject matter experts.
- Develop and deploy solutions on the Azure platform using Infrastructure as Code.
- Implement and maintain components on the Azure Databricks platform.
- Follow agile methodologies to ensure timely project delivery.
- Participate in performance diagnostics and condition monitoring system development.
- Align development efforts with user requirements and stakeholder feedback.
- Regularly report project status and potential risks to leadership.

Key Requirements
- Hands-on experience developing Azure platforms using IaC tools such as Bicep and Terraform.
- Extensive expertise with the Azure Databricks platform, including Unity Catalog, Spark, Python, and Databricks Asset Bundles.
- Advanced proficiency in Python for backend API development using frameworks such as FastAPI and Flask.
- Thorough knowledge of core Python libraries, including NumPy and pandas.
- At least five years of experience in the relevant domain.

Nice to Have
- Experience in performance monitoring and tuning of data solutions.
- Knowledge of best practices in cloud architecture and design.
- Familiarity with machine learning principles and tools.

Other Details
This project is scheduled to start in Q2 2026 and run into Q3 2026, with an overall duration of three to six months. Candidates are expected to work full-time on site in Kgs. Lyngby, though remote work may be possible subject to agreement. The project will entail periods of reduced capacity (approximately 50%), depending on the factory's startup timeline.

Frontend Developer
Introduction & Summary
We are seeking a Frontend Developer. The role demands robust proficiency in designing and implementing user-friendly frontend applications using React.js and TypeScript.

Main Responsibilities
The primary focus of this role is to:
- Design and implement a high-performance frontend application for remote operation.
- Develop responsive, cross-browser interfaces using React.js, TypeScript, and advanced CSS.
- Integrate interactive charts and dashboards with Plotly.
- Facilitate seamless API integration for real-time data exchange.
- Emphasize scalable, component-driven solutions tailored for aftermarket services.
- Collaborate closely with other experts to enhance the operational capabilities of PtX assets.

Key Requirements
- Strong experience with React and modern JavaScript (ES6+).
- Solid understanding of TypeScript.
- Experience with state management (e.g., Redux or similar).
- Deep knowledge of HTML, CSS, and responsive design principles.
- Demonstrated experience with Storybook or comparable tools.
- Proven ability to develop high-performance, responsive, and cross-browser applications.
- Excellent understanding of API integration.
- Hands-on experience building interactive charts and dashboards using Plotly or similar.

Nice to Have
- Familiarity with agile software development practices.
- Experience in DevOps-related environments.

Other Details
This project is set to commence in Q2 2026, with an expected duration of three to six months. Consultants will work full-time on site in Kgs. Lyngby, with opportunities for remote work. Prospective candidates should anticipate potential periods of reduced engagement (approximately 50%) due to uncertainties surrounding the factory start-up date.

Data Scientist
We are seeking an experienced Data Scientist with a Master's degree in Software Engineering, Computer Science, Physics, Chemical Engineering, or a related discipline. The successful candidate will have a minimum of five years of experience and a proven track record in DevOps-oriented environments, utilizing agile software development practices. The role entails developing a Python library and example notebooks to help domain experts analyze large timeseries datasets within a Databricks data warehouse.

Main Responsibilities
The Data Scientist will primarily focus on:
- Developing and enhancing a Python library for data analysis.
- Creating example notebooks for domain experts.
- Implementing automated batch calculations to enrich measurement data.
- Collaborating closely with the data science team and stakeholders.
- Ensuring alignment with user requirements throughout the development process.

Key Requirements
- Demonstrated excellence in Python development.
- Proficiency in SQL.
- Experience with version control systems, particularly Git.
- Proficiency in using Jupyter notebooks.
- Experience working with timeseries data.
- Strong communication and collaboration skills.

Nice to Have
- Experience working in Databricks.
- Experience in a cloud environment, preferably Azure.
- Experience in developing user-facing Python libraries.
- Familiarity with Plotly Dash.
- Experience with Apache Spark or PySpark.
- Knowledge of CI/CD practices for automated testing and deployment.
- Product-oriented mindset with a focus on user needs.
- Background or interest in chemistry, chemical engineering, or electrochemistry.

12 x Data Portfolio Analysts
About the job:
Our client is on a journey to strengthen its enterprise data capabilities by driving data value, ensuring data is consumable, secure, private, accurate, available, and usable across the organisation. To accelerate this journey, we are establishing a department with four new domain-specific Data Portfolio Teams, each responsible for executing strategic data initiatives that enable business outcomes within the Market Data, Operations, Counterpart Risk, and Market Risk areas. The data portfolio teams will be supported by a data architect.

The Data Portfolio Analyst is responsible for independently analyzing, identifying, and implementing data quality improvements and controls within a specific pre-defined data portfolio area, in collaboration with data domain owners, data stewards, and key business stakeholders.

Main Responsibilities:
- Support domain teams in defining and managing Critical Data Elements (CDEs) and governance requirements within their assigned data portfolio areas.
- Assess the current state of data quality for CDEs, with help from data owners, data stewards, and data consumers.
- Ensure data quality rules are identified, risks are assessed, and appropriate target thresholds are defined.
- Identify optimal approaches for resolving data quality or consistency issues to achieve targets.
- Develop expertise in the Group's data quality tool to support consistent rule building in your specific domain area of expertise.
- Monitor and track ongoing data fidelity (e.g., quality and consistency) levels and metrics that assess adherence to data governance policies.
- Ensure data cleansing activities are allocated for correcting data quality flaws that cannot be fully addressed by automated means.
- Monitor delivery health and escalate blockers or issues in your specific domain area of expertise.
Key Requirements:
- Strong understanding of how data enables business processes and how data quality impacts operational, regulatory, and customer outcomes.
- Good understanding of data governance and data quality risk management best practices, ideally with experience in financial services or market data.
- Solid data analysis skills in SQL and Python; experience with Databricks is beneficial.
- Ability to navigate complex data models and understand cross-domain dependencies.
- Strong coordination and communication skills, with experience structuring and driving data quality initiatives at different management levels.
- Skilled in facilitation and conflict resolution, especially when balancing competing requirements across functions.
- Ability to drive behavioral change and support adoption of data governance practices and data-driven ways of working.

Nice to Have:
- Experience working in a collaborative team environment.
- Familiarity with data visualization tools.
- Certification in data governance or data management.

Start: 1st of May (requirement)
Contract duration: 3 secured months, with probable extension to 6 months total.
Workload: Full-time
Location: The consultant must be able to work onsite in Greater Copenhagen a minimum of 4 days per week, with 1 day remote.