I haven't been able to manage enough time to code lately. It has to do with my starting a job - out of a necessity of I don't know what. But more on that on my other blog (which I hope I do get around to writing). Anyway, I thought I should put this here for my own sense of continuity rather than for the (dys)functionality slash (un)greatness of the code.
This is a problem that has to do with predicting the evolution of riverbed elevations in sediment-carrying rivers. Yeah, it's also part of the (second semester of the) M.Sc. Water Resources Engineering program's Water Induced Hazards::Riverbed Variation (RBV) module.
The problem of scraping together enough time to write a functioning program aside, this physical process has always seemed to me nearly intractable to put into a working model - even now, when I believe I have a somewhat acceptable result, spat out a few minutes ago, which is what prompted this write-up. What the program does is solve three PDEs describing the phenomenon using the much-acclaimed MacCormack explicit finite-difference scheme.
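For my own future reference, and with the caveat that this is my reading of the standard formulation rather than a transcription of the class notes, the three PDEs are the one-dimensional water continuity and momentum equations plus sediment continuity (the Exner equation), with h the water depth, q the specific discharge, z the bed elevation, qs the specific sediment discharge, Sf the friction slope and λ the bed porosity:

```latex
% 1-D water continuity, momentum and sediment continuity (Exner),
% as I understand the standard formulation of the RBV problem
\begin{align}
  \frac{\partial h}{\partial t} + \frac{\partial q}{\partial x} &= 0 \\
  \frac{\partial q}{\partial t}
    + \frac{\partial}{\partial x}\!\left(\frac{q^{2}}{h} + \frac{g h^{2}}{2}\right)
    + g h \left(\frac{\partial z}{\partial x} + S_f\right) &= 0 \\
  (1 - \lambda)\,\frac{\partial z}{\partial t} + \frac{\partial q_s}{\partial x} &= 0
\end{align}
```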
When the equations were first presented in class, I don't believe there was much explanation of what each term meant in a physical sense and, more importantly, how one interacted with the others. The presentation went something like this: write down the PDEs; discretize them to obtain equations for the next time level's riverbed elevation, water depth and specific discharge; write down the CFL condition; try and work out a numerical problem. So many questions were left unanswered: What should the boundary conditions be? Are they important? How sensitive is the output to boundary conditions? What does the sediment transport equation relating specific sediment discharge (qs) with specific discharge (q) mean? What does it mean to solve for specific discharge? Does specific discharge itself change as a result of riverbed variation? What output am I expecting? How should the computation procedure move through the finite difference grid? And as I put pen to paper, so to speak, there were so many possibilities to try - each requiring a great deal of effort while most likely being wrong. One would easily be forgiven for giving up on even trying. Anyway, to cut a long story short, I tried to get it to work - a lot - and failed - a lot. Until I stumbled upon this article that referenced
Alam, M. (1998). Application of MacCormack scheme to the study of aggradation degradation in alluvial channels. M.Sc. Engineering thesis, Department of Water Resources Engineering, BUET, Dhaka.
available here. The "Full thesis" was freely available for download at the second link. The document proved extremely helpful and answered most of my questions. It also detailed a set of controlled lab experiments on RBV and gave me a sense of what I should be expecting from my program.
So, here is what the program does:
1. Takes in all the relevant input parameters.
2. Populates the first element of an array of "profile" objects. The first object in this array has the initial condition values for h, q and z along with x and t. x is an array of downstream nodal points. h, q and z are arrays of water depths, specific discharges and bed elevations at the respective x positions. t is the time level of the profile.
3. Runs a loop from 0 to the stipulated maximum time of simulation. For each iteration of this loop, there's another loop that computes the next time level's h, q and z values using the MacCormack explicit formulae at all downstream nodes (roughly the shape sketched in the code right after this list).
4. The sediment discharge at the first (zero) node is held, for all time levels, at a constant value that is a user-specified multiple of the sediment carrying capacity (?) of the initial flow [i.e. qs = a(q/h)^b]. So is the water discharge.
5. Displays the final profiles in a graph as an animation.
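To make steps 2 to 4 a little more concrete, here is a stripped-down sketch of the data structure and the marching loop. Every name in it (makeInitialProfile, simulate, macCormackStep, the parameter fields) is mine, invented for illustration; the actual program is wired differently.

```javascript
// Illustrative sketch only - names and parameter fields are invented for this post.
function makeInitialProfile(p) {
  const x = [], h = [], q = [], z = [];
  for (let i = 0; i < p.nodeCount; i++) {
    x.push(i * p.delX);                         // downstream nodal positions
    h.push(p.h0);                               // initial water depth
    q.push(p.q0);                               // initial specific discharge
    z.push(p.z0 - p.bedSlope * i * p.delX);     // initial bed elevation, sloping downstream
  }
  return { t: 0, x, h, q, z };                  // the first "profile" object
}

function simulate(p, macCormackStep) {
  const profiles = [makeInitialProfile(p)];
  for (let t = p.delT; t <= p.maxTime; t += p.delT) {
    const prev = profiles[profiles.length - 1];
    const next = { t, x: prev.x, h: [], q: [], z: [] };
    for (let i = 0; i < prev.x.length; i++) {
      // Predictor-corrector update of h, q and z at node i, which needs
      // values at nodes i - 1, i and i + 1 of the previous profile.
      // Inside the step, qs would come from the transport relation qs = a * (q / h) ** b.
      const [h, q, z] = macCormackStep(prev, i, p);
      next.h.push(h); next.q.push(q); next.z.push(z);
    }
    // Step 4 (not shown): the water and sediment inflow at node 0 would be
    // pinned to their user-specified constant values here.
    profiles.push(next);
  }
  return profiles;
}
```

Again, this is only the shape of the thing, not the actual code.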
While the algorithm seems pretty straightforward, the actual implementation in code involves subtleties, to say the least. The MacCormack scheme needs values of the unknown parameters one spatial node upstream and one downstream of whichever node is being time-marched. This necessitates computing parameters at as many nodes beyond the actual downstream boundary as the number of time levels desired. Nodes requiring information from upstream of the upstream boundary are simply assigned the respective initial-state values, i.e. no fictitious-node black magic is performed.
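To be concrete about why each node needs its neighbours, the MacCormack update for a generic conservation law of the form ∂U/∂t + ∂F/∂x = S (forward difference in the predictor, backward in the corrector, or the mirror-image variant) looks like this:

```latex
% MacCormack predictor-corrector for dU/dt + dF/dx = S
\begin{align}
  U_i^{*}   &= U_i^{n} - \frac{\Delta t}{\Delta x}\left(F_{i+1}^{n} - F_i^{n}\right)
               + \Delta t\, S_i^{n} \\
  U_i^{n+1} &= \frac{1}{2}\left[ U_i^{n} + U_i^{*}
               - \frac{\Delta t}{\Delta x}\left(F_i^{*} - F_{i-1}^{*}\right)
               + \Delta t\, S_i^{*} \right]
\end{align}
```

The forward difference in the predictor pulls in node i+1 and the backward difference in the corrector pulls in node i-1, which is exactly why every extra time level costs one more node beyond the downstream boundary.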
Also, keeping track of the profiles at different time levels and using the right ones in the scheme without screwing up in the slightest is easier said than done.
There is also the coordination between the main thread and the WebWorker thread, and figuring out a good dynamic between the two.
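The division of labour is simple enough in principle: the worker grinds through the time marching and posts the profiles back, while the main thread handles input and drawing. A bare-bones version of that wiring - file names, message shapes and helper calls here are all made up, not taken from the repo - would be:

```javascript
// ---- main.js (illustrative wiring only) ----
const worker = new Worker('rbv-worker.js');     // worker file name is my assumption

worker.onmessage = (event) => {
  if (event.data.type === 'done') {
    // hand the computed profiles to whatever does the plotting/animation
    console.log('received', event.data.profiles.length, 'profiles');
  }
};

// kick off a run; the params object would really be gathered from the input form
worker.postMessage({ type: 'run', params: { delX: 10, delT: 1, maxTime: 3600 } });

// ---- rbv-worker.js (illustrative) ----
self.onmessage = (event) => {
  if (event.data.type !== 'run') return;
  const profiles = [];   // the real worker would fill this by running the marching loop
  self.postMessage({ type: 'done', profiles });
};
```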
Not all input parameters produce a good result. In fact, most inputs lead to an explosion of the predicted parameters at some point in time. I've tried decreasing delT, thinking the blow-up may be a manifestation of the scheme's instability from a violated CFL condition. But even with a more than comfortable buffer between the delT used and the threshold predicted by the Courant condition, I've seen irrational predictions owing to negative q's, tiny q's and what not. While the aforementioned thesis recommends recomputing delT after each time march from the new q and h values using the CFL criterion, I'm skeptical that it would offer any significant improvement.
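For reference, the Courant condition I keep invoking, and the adaptive delT the thesis suggests, boil down to something like the sketch below. The 0.9 safety factor and the function name are my own choices, not anything from the thesis or the repo:

```javascript
// Courant-type limit: delT <= Cn * delX / max(|u| + sqrt(g * h)), with u = q / h.
const g = 9.81;

function maxAllowableDelT(profile, delX, Cn = 0.9) {
  let maxCelerity = 0;
  for (let i = 0; i < profile.h.length; i++) {
    const u = profile.q[i] / profile.h[i];      // depth-averaged velocity
    maxCelerity = Math.max(maxCelerity, Math.abs(u) + Math.sqrt(g * profile.h[i]));
  }
  return (Cn * delX) / maxCelerity;
}

// What the thesis recommends, as I read it: recompute delT after every time march
// from the freshly computed q and h, e.g.
//   delT = maxAllowableDelT(profiles[profiles.length - 1], delX);
```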
Anyway, I'm done with this.
EDIT:
This is after I've realized what GitHub Pages is.
Live demo: here
Corresponding repo: here
Future updates, if any, will be made to the repo, not the Gist.
I don't know why the first time march reduces the bed level to below the initial value here.