On the gradient descent method for solving optimization problems in machine learning (Về phương pháp gradient descent giải bài toán tối ưu trong học máy)


Authors: Thị Thanh Xuân Bùi, Thị Nhung Dương, Thị Thanh Hà

Language: Vietnamese (vie)

Classification: 006.31 Machine learning

Published in: Khoa học và công nghệ (Đại học Thái Nguyên), 2016

Physical description: pp. 95-99

Collection: Metadata

ID: 654542

The gradient descent method was originally considered a direct method for solving linear equations, but its favorable properties as an iterative method were soon realized, and it was later generalized to broader optimization problems. It provides an effective way to optimize large deterministic systems. Stochastic gradient descent (SGD) is a gradient descent optimization method for minimizing an objective function that is written as a sum of differentiable functions. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates decrease at an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudo-convex, and otherwise converges almost surely to a local minimum. Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression, and graphical models.
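As a minimal illustration of the method the abstract describes (not code from the paper itself), the sketch below applies SGD to logistic regression, one of the models mentioned. The objective is the average loss f(w) = (1/n) Σᵢ fᵢ(w), and each update uses the gradient of a single randomly chosen term fᵢ with a decreasing learning rate ηₜ = η₀ / (1 + λt). The data, step-size schedule, and hyperparameters are assumptions chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic(X, y, epochs=20, eta0=0.5, decay=0.01, seed=0):
    """Minimize the average logistic loss f(w) = (1/n) * sum_i f_i(w)
    by stepping along the gradient of one random term f_i at a time.
    The schedule eta_t = eta0 / (1 + decay * t) is an illustrative
    choice of decreasing learning rate, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta = eta0 / (1.0 + decay * t)                # decreasing learning rate
            grad_i = (sigmoid(X[i] @ w) - y[i]) * X[i]    # gradient of f_i at w
            w -= eta * grad_i                             # SGD update step
            t += 1
    return w

# Toy usage on synthetic, linearly separable data (assumed for the demo):
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)
w = sgd_logistic(X, y)
print("learned weights:", w)
```

With a convex loss such as this one, the decreasing learning rate is what allows the almost-sure convergence to a global minimum that the abstract refers to; with a fixed step size, the iterates would instead oscillate in a neighborhood of the minimizer.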
