CFD models of data centers often employ two-equation turbulence models such as the k-ε model. These models rely on closure coefficients, or turbulence model constants, determined from a combination of scaling/dimensional analysis and experimental measurements of flows in simple configurations. The simple configurations used to derive the turbulence model constants are often two-dimensional and lack many of the complex flow characteristics found in engineering flows. Consequently, such models perform poorly in flows with large pressure gradients, swirl, and strong three-dimensionality, as in the case of data centers. This study uses machine learning algorithms to optimize the model constants of the k-ε turbulence model for a data center by comparing simulated data with experimentally measured temperature values. For a given set of turbulence constants, we compute the root mean square (RMS) error of the model, defined as the difference between temperatures measured experimentally in a data center test cell and the corresponding CFD predictions obtained with the k-ε model. An artificial neural network (ANN) based parameter-identification method is then used to find the values of the turbulence constants that minimize this error. The optimum turbulence model constants obtained in our study lower the RMS error by 25% and the absolute average error by 35% compared to the errors obtained with the standard k-ε model constants.
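As an illustrative sketch only (not code from the study), the two error metrics reported above can be expressed as follows. The sensor readings are hypothetical; in the actual workflow the "simulated" values would come from a full k-ε CFD run of the data center test cell, evaluated at the experimental sensor locations.

```python
import math

def rms_error(measured, simulated):
    """Root-mean-square difference between measured and simulated temperatures."""
    diffs = [m - s for m, s in zip(measured, simulated)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def abs_avg_error(measured, simulated):
    """Mean absolute difference between measured and simulated temperatures."""
    return sum(abs(m - s) for m, s in zip(measured, simulated)) / len(measured)

# Hypothetical sensor temperatures (degrees C) at four rack-inlet locations:
measured = [21.0, 23.5, 25.0, 22.0]
simulated = [21.5, 22.5, 26.0, 22.0]
print(rms_error(measured, simulated))      # 0.75
print(abs_avg_error(measured, simulated))  # 0.625
```

The ANN-based parameter identification would then treat `rms_error` as the objective to minimize over candidate sets of turbulence constants, with each evaluation requiring one CFD run.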