Commit f36eebf2 authored by Jakub Klinkovský's avatar Jakub Klinkovský

Update LICENSE and README files

parent ed685692
LICENSE
The MIT License (MIT)

Copyright (c) 2019 Fabian Meyer
Copyright (c) 2022 Jakub Klinkovský

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
README.md
# tnl-gradient-descent

![Cpp11](https://img.shields.io/badge/C%2B%2B-11-blue.svg)

tnl-gradient-descent is a header-only C++ library for gradient descent
optimization using the Template Numerical Library.

## Install

```bash
cmake ..
make install
```

The library requires the Template Numerical Library (TNL) to be installed on
your system and visible to your build system.

## Usage

There are three steps to use tnl-gradient-descent:

* Implement your objective function as functor
* Instantiate the gradient descent optimizer
```cpp
struct Ackley
{
    Ackley()
    { }

    template <typename Vector>
    double operator()(const Vector &xval, Vector &) const
    {
        assert(xval.getSize() == 2);
        double x = xval(0);
        double y = xval(1);
        // Calculate the Ackley function, but no gradient. Let the gradient be estimated
        // ...
    }
};

int main()
{
    // Define the vector type
    using Vector = TNL::Containers::Vector<double, TNL::Devices::Host, gdc::Index>;

    // Create optimizer object with Ackley functor as objective.
    //
    // You can specify a StepSize functor as template parameter.
    // ...
    // You can additionally specify a FiniteDifferences functor as template
    // parameter. There are Forward-, Backward- and CentralDifferences
    // available. (Default is CentralDifferences)
    gdc::GradientDescent<double, Ackley, gdc::WolfeBacktracking<double>> optimizer;

    // Set number of iterations as stop criterion.
    // Set it to 0 or negative for infinite iterations (default is 0).
    // ...
    optimizer.setVerbosity(4);

    // Set initial guess.
    Vector initialGuess = {-2.7, 2.2};

    // Start the optimization
    auto result = optimizer.minimize(initialGuess);
    // ...
    std::cout << "Final fval: " << result.fval << std::endl;

    // do something with final x-value
    std::cout << "Final xval: " << result.xval << std::endl;

    return 0;
}
```