Commit f36eebf2 authored by Jakub Klinkovský

Update LICENSE and README files

parent ed685692
LICENSE +1 −0
The MIT License (MIT)

Copyright (c) 2019 Fabian Meyer
Copyright (c) 2022 Jakub Klinkovský


Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
README +14 −26
# tnl-gradient-descent

tnl-gradient-descent is a header-only C++ library for gradient descent
optimization using the Template Numerical Library.

## Install

@@ -21,20 +16,11 @@ cmake ..
make install
```

You can use the CMake Find module in ```cmake/``` to find the installed header.
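To consume the installed header from another CMake project, a minimal sketch is shown below. The ```GDCpp``` package name and ```GDCPP_INCLUDE_DIR``` variable are assumptions for illustration only; check the module actually shipped in ```cmake/``` for the real names.

```cmake
# Sketch only: the package name "GDCpp" and the variable GDCPP_INCLUDE_DIR
# are assumptions -- inspect cmake/Find*.cmake for the names this repo uses.
cmake_minimum_required(VERSION 3.5)
project(gdc_example CXX)

# Make the shipped Find module visible to find_package()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
find_package(GDCpp REQUIRED)

add_executable(example main.cpp)
target_include_directories(example PRIVATE ${GDCPP_INCLUDE_DIR})
```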


## Usage


There are three steps to use tnl-gradient-descent:

* Implement your objective function as functor
* Instantiate the gradient descent optimizer
@@ -51,9 +37,10 @@ struct Ackley
    Ackley()
    { }

    template <typename Vector>
    double operator()(const Vector &xval, Vector &) const
    {
        assert(xval.getSize() == 2);
        double x = xval(0);
        double y = xval(1);
        // Calculate the Ackley function, but no gradient. Let the gradient be estimated
@@ -66,6 +53,9 @@ struct Ackley


int main()
{
    // Define the vector type
    using Vector = TNL::Containers::Vector<double, TNL::Devices::Host, gdc::Index>;

    // Create optimizer object with Ackley functor as objective.
    //
    // You can specify a StepSize functor as template parameter.
@@ -75,8 +65,7 @@ int main()
    // You can additionally specify a FiniteDifferences functor as template
    // parameter. There are Forward-, Backward- and CentralDifferences
    // available. (Default is CentralDifferences)
    gdc::GradientDescent<double, Ackley, gdc::WolfeBacktracking<double>> optimizer;


    // Set number of iterations as stop criterion.
    // Set it to 0 or negative for infinite iterations (default is 0).
@@ -101,8 +90,7 @@ int main()
    optimizer.setVerbosity(4);


    // Set initial guess.
    Vector initialGuess = {-2.7, 2.2};


    // Start the optimization
    auto result = optimizer.minimize(initialGuess);
@@ -114,7 +102,7 @@ int main()
    std::cout << "Final fval: " << result.fval << std::endl;


    // do something with final x-value
    std::cout << "Final xval: " << result.xval << std::endl;


    return 0;
}