Tuesday, March 26, 2019

Nepali EPapers

A simple Python 3.7 script to download common Nepali newspapers as JPEGs.
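At its core, the script just fetches each paper's page images and writes the bytes to disk. A minimal sketch of that idea, with a made-up URL standing in for the real e-paper links the script resolves:

```python
import requests

# Placeholder URL for illustration only; the actual script builds each
# newspaper's real e-paper image URL for the requested date.
url = "https://example.com/epaper/2019-03-26/page1.jpg"

response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on a bad status code

with open("page1.jpg", "wb") as f:
    f.write(response.content)  # save the raw JPEG bytes
```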

Monday, March 18, 2019

Traveling Salesman Problem with a Genetic Algorithm

This is an attempt at an approximate solution to the famous Traveling Salesman Problem using the Genetic Algorithm heuristic. There are countless articles on the topic out there; I just wanted to make my own 'general' GA module that could be used for any suitable problem. So, here is the implementation. As in the last post, the videos by The Coding Train on the subject were really helpful.

Some observations I made while playing around with the model:
- Population size helps only up to a point. A larger population brings more diversity, but beyond a certain size the extra computation per generation probably swamps any gains from that diversity, and the time to converge to a solution simply grows with the population.
- Mutations are pivotal. Setting the mutation rate to 0 makes the algorithm settle very early on a local optimum, which often isn't a good enough solution.
- There's probably a lot that could be improved. Maybe a large initial mutation rate that decays over time could overcome local optima in favor of the global optimum. Maybe an improved fitness function, better crossover, or a stronger mutation. (The sketch below shows where those knobs would plug in.)
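For reference, the core loop of a generic GA module looks roughly like this. It's a minimal sketch rather than the actual module, with `fitness`, `crossover`, and `mutate` standing in for the problem-specific pieces (for TSP these would be tour length and permutation-safe operators):

```python
import random

def evolve(population, fitness, crossover, mutate,
           mutation_rate=0.01, generations=500):
    """Generic GA loop: evaluate, select, breed, mutate, repeat."""
    for _ in range(generations):
        scores = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(len(population)):
            # Fitness-proportionate selection of two parents.
            mom, dad = random.choices(population, weights=scores, k=2)
            child = crossover(mom, dad)
            if random.random() < mutation_rate:
                child = mutate(child)
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```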

Anyway, here is a demo gif:

Sunday, March 17, 2019

Weighted random picker

I've been dabbling in Genetic Algorithms as of late. Great videos here on the subject.
While implementing a GA in JavaScript, I came across the problem of selecting individuals from a population into a mating pool according to their fitness values. In terms of code, it boils down to picking from a list of objects following a custom distribution, typically defined by certain 'weights' assigned to each of them.
I first tried the most obvious and most naive solution: generate a new list consisting of the original list's objects, with each object's count corresponding to its weight. For example, if A, B and C are the objects with a weight distribution of 50%, 30% and 20%, I created a new list of, say, a hundred objects in total: 50 A's, 30 B's and 20 C's. Choosing randomly from this new list should then give roughly the desired distribution over the original objects. But this method quickly led to a whole host of problems. For starters, with large original populations the individual probabilities/normalized weights can be arbitrarily small, which means the new pool's total population has to be correspondingly large to reflect all the small differences between the original members' weights. Ignoring this fact led to diminishing diversity in subsequent generations and, owing to that, a complete collapse of the mating pool after a number of iterations.
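In code, the naive expansion looks roughly like this (a Python sketch of the idea; my GA code itself was in JavaScript):

```python
import random

def naive_weighted_pick(items, weights, pool_size=100):
    """Expand items into a pool in proportion to their weights, then pick uniformly."""
    total = sum(weights)
    pool = []
    for item, weight in zip(items, weights):
        # e.g. a 50% weight claims 50 slots of a 100-slot pool
        pool.extend([item] * round(pool_size * weight / total))
    return random.choice(pool)

# A, B and C with weights 50%, 30% and 20%
print(naive_weighted_pick(["A", "B", "C"], [50, 30, 20]))
```

The failure mode is the rounding: once two weights differ by less than one pool slot, the difference is erased unless `pool_size` keeps growing.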
Researching the topic on the internet turned up a couple of solutions, but most of them, while fully functional, were unintuitive. There was a lot of head-scratching and frustration-filled deliberation. I knew that for a weighted/biased true-or-false decision, the fact that a random percentage needs to fall below a given occurrence rate can be exploited as a decision criterion. For example, if the desired occurrence rate is 1%, draw a random percentage, and if it happens to be less than 1%, take the outcome as true. The distribution is obviously going to be heavily skewed towards false, and it checks out against the desired rate of 1%. However, when it comes to a bunch of different objects each with its own probability/relative weight, the logic cannot simply forgo the interdependence that inherently exists between such objects. Using the same idea willy-nilly on each member just doesn't work.
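The true/false version of the trick is a one-liner:

```python
import random

def biased_flip(rate=0.01):
    # True roughly `rate` of the time, heavily skewed towards False.
    return random.random() < rate
```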
Then I got an idea of my own. Somehow, an object's probability score had to map to its selection, and that process could not be deterministic: a score of 80% shouldn't always mean a yes, and shouldn't always mean a no. The choice should be random, but even in randomness it should be a yes more often than not. What if I multiplied each member's probability by a random percentage? Some members would draw a large random percentage and some a small one. Since the random numbers themselves carry no particular bias, the resulting 'scaled' probabilities are random with a skew defined by the member-specific weights. Because of those imposed biases, members with larger weights will, on average, score higher than others. This won't always hold, but the frequency with which a member ends up with the highest final score grows with its assigned weight: the greater the weight, the likelier it is to be the top scorer. Members with lower weights will occasionally top the list, but rarely. Pick the highest-scoring member enough times, and the resulting distribution reflects this phenomenon.
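Condensed into Python (a sketch of the idea; the code linked below is the JavaScript original):

```python
import random
from collections import Counter

def weighted_pick(items, weights):
    """Scale each member's weight by a fresh random number; return the top scorer."""
    return max(zip(items, weights),
               key=lambda pair: pair[1] * random.random())[0]

# Quick sanity check: higher-weighted members should win more often.
picks = Counter(weighted_pick(["A", "B", "C"], [50, 30, 20])
                for _ in range(10_000))
print(picks)
```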
Phew! Enough rambling.
Here is the code.
For a piece of code that small, this is quite a big explanation, isn't it?

Tuesday, March 12, 2019

Dijkstra's algorithm for optimal paths

A friend of mine who studies Operations Research in Transportation Engineering suggested that I look into the problem of path optimization subject to one or more constraints, and provided me with a paper he wrote some time ago. I perused the paper and was fascinated by the topic. After some internet research, I thought I could start by implementing a shortest-path algorithm known as Dijkstra's algorithm.
This is a Python implementation of Dijkstra's optimal path algorithm using an adjacency matrix read from a CSV file. The direction of travel is row to column, and an inaccessible path is represented by a distance of 1000. Given a starting node, the program looks for its neighbor nodes and, for each, calculates and stores the distance from the starting node along with its index. Then, for each of the neighbor nodes, their neighboring nodes are sought in turn, and the shortest cumulative distance to each over all possible routes, along with the corresponding parent node index, is stored. This continues until all the nodes have been visited. In the end, each node carries two important pieces of information: the parent node on the optimal path and the optimal total distance to the node from the origin/starting node.
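A condensed sketch of that procedure (not the full linked code; `matrix.csv` is a placeholder file name):

```python
import csv

INF = 1000  # distance marking an inaccessible path

def dijkstra(matrix, start):
    """Return (dist, parent) for every node; travel runs row -> column."""
    n = len(matrix)
    dist = [INF] * n
    parent = [None] * n
    visited = [False] * n
    dist[start] = 0
    for _ in range(n):
        # Visit the unvisited node with the smallest known distance.
        u = min((d, i) for i, d in enumerate(dist) if not visited[i])[1]
        visited[u] = True
        # Relax every neighbor reachable from u.
        for v in range(n):
            if not visited[v] and dist[u] + matrix[u][v] < dist[v]:
                dist[v] = dist[u] + matrix[u][v]
                parent[v] = u
    return dist, parent

with open("matrix.csv") as f:  # placeholder file name
    matrix = [[float(x) for x in row] for row in csv.reader(f)]
dist, parent = dijkstra(matrix, start=1)  # node 2 in the paper's 1-based numbering
```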
This way of going about the shortest-route problem is based on this explanation, which I found very helpful.
Optimal path from node 2 to node 15
Manual Dijkstra iteration along with content from the aforementioned paper on the topic

These Google Maps pictures with the assignment of nodes from 1 to 15 have been directly excerpted from my friend's paper.
Code @ GitHub