Huawei is a leading telecom solutions provider. Through continuous customer-centric innovation, Huawei has established end-to-end advantages in Telecom Network Infrastructure, Application & Software, Professional Services and Devices. With comprehensive strengths in wireline, wireless and IP technologies, Huawei has gained a leading position in the All-IP convergence age. Its products and solutions have been deployed in over 100 countries and have served 45 of the world's top 50 telecom operators, as well as one third of the world's population.

Internship in Nonconvex Optimization

In the context of communication networks, neural networks are primarily involved in automated pattern detection/extraction and recognition/identification tasks, both in online signal processing (packet flows, for instance) and in sub-graph analysis (network topology and path properties). Neural models aim at augmenting, or even replacing, current local routing and forwarding models and processes. For this purpose, a generalized model must be extracted and selected from a finite set of input-output samples drawn from an unknown probability distribution, and it must remain descriptive enough for unseen/new input data. However, due to the nonconvexity of the set of functions that can be approximated by neural networks and the nonconvexity of the objective function to be minimized, their training proceeds by solving nonconvex optimization problems over nonconvex sets. Moreover, due to various operational or performance requirements, training further involves solving constrained nonconvex optimization problems.
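To make this concrete, the constrained training problem sketched above can be written in the following generic form (an illustration only; the actual loss, network architecture, and constraints depend on the application):

\[
\min_{\theta \in \mathbb{R}^n} \ \frac{1}{m} \sum_{i=1}^{m} \ell\big(f_\theta(x_i), y_i\big)
\quad \text{s.t.} \quad c(\theta) \le 0, \quad h(\theta) = 0,
\]

where \(f_\theta\) denotes the neural network, \(\ell\) a (generally nonconvex) loss, \(\{(x_i, y_i)\}_{i=1}^{m}\) the finite set of input-output samples, and \(c\) and \(h\) the inequality and equality constraints encoding the operational or performance requirements.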

On the other hand, automatic differentiation (AD), on which neural network training algorithms rely, remains largely unexploited in optimization solvers. Commonly, solving bound-constrained nonlinear optimization problems with, e.g., the Newton method or the Augmented Lagrangian Method (ALM) requires evaluating the objective function, its gradient and the (sparsity pattern of the) Hessian matrix. Constrained optimization problems additionally require the sparsity pattern and the Jacobian matrix of the constraints. The time and resources required to obtain this information and verify its correctness can be relatively large even for simple problems. Nowadays, optimization problem-solving environments provide modeling languages and state-of-the-art optimization solvers together with packages capable of computing first-order information, e.g., derivatives and gradients. However, AD remains largely unexploited for generating these quantities, so solving nonlinear optimization problems requires running the entire framework and all of its packages. For constrained neural network training, it would also be beneficial to solve the corresponding nonconvex constrained optimization problem (including inequality constraints) by means of unified AD tools. Indeed, combining AD tools that automatically generate the required first- and second-order quantities with nonlinear optimization methods such as the ALM yields a promising approach.
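As a minimal sketch of this combination (illustrative only, not part of the internship description), the following Python/JAX snippet uses AD to generate every derivative quantity an ALM needs. The toy objective and constraint are hypothetical, and a production solver would safeguard the Newton step (e.g., with a trust region or line search) against indefinite Hessians:

import jax
import jax.numpy as jnp

# Hypothetical toy problem, chosen only to keep the sketch self-contained:
# a quadratic objective subject to a single (nonconvex) equality constraint.
def f(x):
    return (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 2.0) ** 2

def c(x):
    return jnp.array([x[0] ** 2 + x[1] ** 2 - 1.0])  # constraint c(x) = 0

# AD generates the quantities a constrained solver consumes; none of them
# has to be derived or coded by hand.
grad_f = jax.grad(f)      # gradient of the objective
hess_f = jax.hessian(f)   # Hessian of the objective
jac_c = jax.jacobian(c)   # Jacobian of the constraints

def aug_lagrangian(x, lam, mu):
    # L_A(x; lam, mu) = f(x) + lam^T c(x) + (mu / 2) * ||c(x)||^2
    cx = c(x)
    return f(x) + lam @ cx + 0.5 * mu * jnp.dot(cx, cx)

grad_L = jax.grad(aug_lagrangian)     # first-order info, again via AD
hess_L = jax.hessian(aug_lagrangian)  # second-order info, again via AD

def alm(x, lam=jnp.zeros(1), mu=10.0, outer=20, inner=20):
    for _ in range(outer):
        for _ in range(inner):
            # Plain Newton step on the augmented Lagrangian
            # (unsafeguarded, for brevity only).
            g = grad_L(x, lam, mu)
            H = hess_L(x, lam, mu)
            x = x - jnp.linalg.solve(H, g)
        lam = lam + mu * c(x)         # first-order multiplier update
    return x, lam

x_star, lam_star = alm(jnp.array([2.0, 0.0]))
print(x_star, c(x_star))              # constraint residual should be near 0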

Objective: investigate nonconvex optimization methods that converge rapidly with few function and derivative evaluations, and that provide and improve second-order information with the same efficiency and reliability as is currently available for first-order information.

Task(s):

  • Formulate, develop and numerically evaluate enhancements/extensions of computational methods/algorithms including ALM for solving constrained nonconvex optimization problems.
  • To address the computational performance objectives, the candidate will also actively participate in the design (extension) and evaluation of (existing) automatic differentiation tools for solving such optimization problems.
  • Integration of these tools into a unified AD framework will be carried out in cooperation with computational/numerical method experts. These tasks will be performed under the supervision of a senior (postdoc-level) researcher.

Duration:

  • Short duration (from 3 to 6 months): suited for MSc curriculum course
  • Long duration (up to 12 months): suited for MSc thesis internship
  • Note well: this is an unpaid student internship (no employment contract); only direct costs are reimbursed, up to 660 euros per month.

Candidate profile:

  • MSc (or last year of MSc curriculum) in applied mathematics, mathematical engineering, theoretical computer science, or computer science engineering.
  • Strong nonlinear modeling and mathematical programming skills.
  • Very good knowledge of techniques such as first-/second-order iterative methods, inexact methods and Krylov subspace methods ((B)CG, Lanczos, GMRES, MINRES, Arnoldi, etc.) for nonconvex optimization problems.
  • Experience in programming with nonlinear optimization libraries, e.g., LANCELOT/CUTE, MINOS, TRON, NLopt, etc.
  • Excellent written, verbal and interpersonal communication skills.
  • Knowledge of a functional programming language (LISP, Julia, etc.) is considered a plus.

Application requirements:

  • A certified copy of the MSc diploma/certificate shall be included as an annex to the CV.
  • The CV shall include the full contact details of the candidate's current or, where applicable, most recent academic institution and department.
  • The CV shall include a detailed list of publications/achievements in relation to the job description.
  • Note well: the candidate must be pursuing or have obtained an MSc degree at an academic institution in one of the EU countries.

Starting date: February 1, 2021 (earliest) – March 31, 2021 (latest).

Your application will be evaluated by the HR department of the Huawei R&D Sites in Belgium and the Netherlands. For any additional feedback regarding your application, we kindly refer you to that HR department.
