Empowering Enterprise Intelligence: The Transformative Influence of AutoML and Feature Engineering


Balaji Dhamodharan

Abstract

In today's data-driven landscape, enterprises are increasingly reliant on advanced analytics to gain actionable insights and drive informed decision-making. Automated Machine Learning (AutoML) and feature engineering have emerged as transformative tools, streamlining the process of model development and enhancing the predictive power of algorithms. This paper explores the synergistic influence of AutoML and feature engineering on enterprise intelligence, highlighting their ability to democratize data science, accelerate model deployment, and improve the efficiency and accuracy of predictive analytics. Through real-world examples and case studies, the paper demonstrates how AutoML and feature engineering empower organizations to extract valuable insights from complex data sources, optimize resource allocation, and gain a competitive edge in today's dynamic business environment.
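The workflow the abstract describes, automated feature engineering feeding an automated model-selection loop, can be sketched in a few lines. The example below is a minimal illustration, not the paper's method: it uses scikit-learn's `ColumnTransformer` for feature engineering and a `GridSearchCV` hyperparameter search as a small stand-in for a full AutoML system, on synthetic data invented for the sketch.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic enterprise-style data: one categorical and one numeric column.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "east"], size=200),
    "spend": rng.normal(100.0, 25.0, size=200),
})
y = (df["spend"] > 100).astype(int).to_numpy()

# Automated feature engineering: one-hot encode categoricals, scale numerics.
features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
    ("num", StandardScaler(), ["spend"]),
])

pipeline = Pipeline([
    ("features", features),
    ("model", LogisticRegression(max_iter=1000)),
])

# AutoML-style search: cross-validated selection over model hyperparameters.
search = GridSearchCV(pipeline, {"model__C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(df, y)
print(search.best_params_)
```

Production AutoML systems such as auto-sklearn or H2O extend this same pattern, searching jointly over preprocessing steps, model families, and hyperparameters rather than a single fixed pipeline.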


Article Details

How to Cite
Empowering Enterprise Intelligence: The Transformative Influence of AutoML and Feature Engineering (B. Dhamodharan, Trans.). (2023). International Journal of Creative Research In Computer Technology and Design, 5(5), 1-11. https://jrctd.in/index.php/IJRCTD/article/view/48
Section
Articles


