Turk J Math (2017) 41: 808 – 824
© TÜBİTAK
Turkish Journal of Mathematics
doi:
Research Article

Penalty-free method for nonsmooth constrained optimization via radial basis functions

Fardin RAHMANPOUR1,*, Mohammad Mehdi HOSSEINI1,2, Farid Mohammad MAALEK GHAINI1
1 Department of Mathematics, Yazd University, Yazd, Iran
2 Department of Applied Mathematics, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran

Received: • Accepted/Published Online: • Final Version:

Abstract: We consider a general class of nonlinear constrained optimization problems in which derivatives of the objective function and constraints are unavailable. This property can impede the performance of optimization algorithms, most of which determine a quasi-Newton direction and then apply line search techniques. We propose a smoothing algorithm that does not require a penalty function. A new algorithm is developed to modify the trust region and to handle the constraints based on radial basis functions (RBFs). The value of the objective function is reduced according to the predicted reduction of the constraint violation achieved by the trial step. At each iteration, the constraints are approximated by a quadratic model obtained from RBFs. The aim of the present work is to maintain a good geometry of the interpolation points in order to obtain a proper approximation within a small trust region. Numerical results are presented for some standard test problems.
Key words: Exact penalty function, derivative-free method, trust-region method, nonsmooth optimization, radial basis functions, constrained optimization, nonlinear programming

1. Introduction

Consider the nonlinear optimization problem with general nonlinear constraints:

    min_x  f(x)
    s.t.   g_p(x) ≥ 0,  p ∈ I_1 = {1, 2, . . . , P},
           h_t(x) = 0,  t ∈ I_2 = {1, 2, . . . , T}.
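Since derivatives of f, g_p, and h_t are assumed unavailable, derivative-free methods of this kind replace each function by an interpolating surrogate built from sampled values. As a minimal sketch of the idea (not the authors' specific method), the following Python code builds a cubic RBF interpolant with a linear polynomial tail from a set of sample points; the function and variable names are our own illustrative choices:

```python
import numpy as np

def rbf_interpolant(points, values):
    """Build a cubic RBF surrogate s(x) = sum_i lam_i ||x - x_i||^3 + c0 + c^T x
    interpolating the given sample values (a common derivative-free model)."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    n, d = points.shape
    # Pairwise distances and the cubic kernel phi(r) = r^3
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    Phi = dists ** 3
    # Linear polynomial tail basis: [1, x_1, ..., x_d]
    P = np.hstack([np.ones((n, 1)), points])
    # Saddle-point system enforcing interpolation and orthogonality conditions
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([values, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    lam, c = coef[:n], coef[n:]

    def s(x):
        x = np.asarray(x, dtype=float)
        r = np.linalg.norm(points - x, axis=1)
        return lam @ (r ** 3) + c[0] + c[1:] @ x
    return s
```

A surrogate of this type can stand in for each constraint inside the trust region; the quality of the model depends on the geometry of the interpolation points, which is why the point set must be kept well positioned.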