Thursday, May 21, 2015

Random Curiosities (II)

Once again I've compiled a full reading list, and the most interesting items are selected for this week's 'random curiosities'. It seems to me that these lists will surely help me keep track of my readings and interests over time, and it will definitely be interesting to look back on them.


  • The first item on the list is a great documentary I've watched recently called 'The Emergence of Network Science', featuring Steven Strogatz. It is mainly focused on graphs, especially the famous 'six degrees of separation' problem, and introduces the newly emerging science of networks. I encountered many new names in the film, one of the major ones being A.-L. Barabási and his wonderful new book 'Network Science'. It is published online and it seems like a great resource. I will definitely check it out before the 'Applied Graph Theory' course at the Nesin Mathematics Village, which I'll be participating in this summer.
  • A game-theoretical ecological article called 'The hawk–dove game in a sexually reproducing species explains a colourful polymorphism of an endangered bird' in the recent issue of Proceedings of the Royal Society B. It examines the famous 'hawk-dove game' behaviour in Gouldian finch populations and derives the conditions for the evolutionarily beneficial polymorphism to be retained in the population. Link for the article.
  • A recent article by C. Veller and M. Nowak titled "Extended flowering intervals of bamboos evolved by discrete multiplication", which is about the mathematical pattern behind the flowering of bamboo plants (Link for the article). There is a nice overview of the paper called 'Bamboo Mathematicians' on Carl Zimmer's blog.
  • From the New York Times, an article on a mathematical population model of blue crabs in Chesapeake Bay based on field data: Mathematicians and Blue Crabs
  • Another interesting article, from Yale, on models of evolutionary mechanisms. It proposes that the “house of cards” model (which holds that mutations with large effects effectively reshuffle the genomic deck) explains evolutionary processes better than the theory that species undergo the accumulation of many mutations with small effects. Further reading: In evolution, ‘house of cards’ model wins
  • Quanta Magazine has published an interesting article about genetically identical flies and their diverging individual behaviours studied through genetic and environmental variations. Details in the article: 'Animal Copies Reveal Roots of Individuality'
  • Nautilus is running a very interesting theme this month, 'Error', with the sub-title "How does the scientist negotiate the hall of mirrors and come out clutching the truth?..." Worth checking out.
  • An inspiring read from the great mathematician V. I. Arnold 'On Mathematics Education' from the archives of Dynamical Systems Magazine.
  • Book find of the week: A Mathematical Nature Walk by John A. Adam. Full of wonderful questions about various natural phenomena, along with inspiring models and answers for them. Definitely a gem. (Book link)

Thursday, May 14, 2015

StatMech (I) - Equiprobability Principle

In my statistical mechanics courses, every time we started from the very basic 'equiprobability principle', which states that
... in a state of thermal equilibrium, all the accessible microstates of the system are equally probable
It looks like a very innocent and trivial sentence, but rarely is it stated that this is the very foundation of all equilibrium statistical mechanics, and I can assure you that it is far from trivial! I had a chance to grapple with this notion this term while working on a computational project for my graduate Statistical Mechanics class, a 'Monte Carlo Simulation of Hard Spheres' focusing on the ergodicity of Boltzmann statistical mechanics and classical mechanics. I will try to share my 'enlightenment' in a couple of posts, starting with this one.
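As a taste of what that project involved, here is a minimal sketch of the kind of Metropolis move used in a hard-disk simulation (a toy reconstruction under my own assumptions: 2D disks in a periodic box, not the actual project code). Since all non-overlapping configurations of hard spheres have the same energy, the Metropolis rule degenerates into 'accept the trial move iff it creates no overlap':

```python
import numpy as np

rng = np.random.default_rng(42)

L = 10.0      # side of the periodic square box
sigma = 1.0   # disk diameter
N = 16        # number of disks
step = 0.3    # maximum trial displacement per coordinate

# start from a dilute square lattice so there are no initial overlaps
side = int(np.ceil(np.sqrt(N)))
pos = np.array([[(i + 0.5) * L / side, (j + 0.5) * L / side]
                for i in range(side) for j in range(side)], dtype=float)[:N]

def overlaps(pos, k, trial):
    """True if disk k placed at `trial` would overlap any other disk
    (minimum-image convention for the periodic boundaries)."""
    d = pos - trial
    d -= L * np.round(d / L)      # minimum image
    dist2 = np.sum(d * d, axis=1)
    dist2[k] = np.inf             # ignore the moved disk itself
    return bool(np.any(dist2 < sigma ** 2))

n_steps, accepted = 20_000, 0
for _ in range(n_steps):
    k = rng.integers(N)                                  # pick a disk at random
    trial = (pos[k] + rng.uniform(-step, step, 2)) % L   # displaced position
    if not overlaps(pos, k, trial):                      # hard-core constraint:
        pos[k] = trial                                   # accept iff no overlap
        accepted += 1

print(f"acceptance ratio: {accepted / n_steps:.2f}")
```

Whether a chain of such local moves actually visits the accessible configurations uniformly is precisely the ergodicity question these posts circle around.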

First of all, we need to state that there is no complete axiomatic derivation of equilibrium statistical mechanics from the foundations of classical dynamics. We need additional inputs such as the one above in order to get it. Thus the above statement, the equiprobability principle, plays a significant role in statistical mechanics.

Consider an isolated, closed Hamiltonian system with no energy or particle exchange allowed with its surroundings, for example a gas in a closed container. We can characterize the system by the Hamiltonian function $H(q,p)$ if the dynamics involved has time-translational symmetry, where the $q$'s and $p$'s are the corresponding degrees of freedom (DOF) of the system. We can measure the value of $H$ with some precision, say $\delta E$: \[H(q,p) = E + \delta E \tag{1}\]This specification of the energy of the system can be thought of as a constraint: once we specify $E$, we are constrained to move on an energy hypersurface (a shell in phase space) defined by $(1)$. In the absence of further information, there is little one can do, except that our experience with generic classical systems with many DOF tells us that such a system is likely to be chaotic; it is likely to have positive Lyapunov exponents, and it wanders around so that the energy surface is filled up (i.e. it is ergodic - more on that in the next post).

A simple representation of the phase space of the system. The dimension of the whole space in this case is 24; the collections of configuration and momentum axes are therefore depicted with only one axis each, for representational purposes (Source).

Due to this energy constraint, it is clear that the whole space is not accessible to the system. Our assumption is that any one portion of this energy shell is as likely to be occupied as any other. We can regard this as a statement of maximal ignorance: since we know nothing more about the system, we say that all outcomes are equally likely. Now we can state our fundamental postulate of equilibrium statistical mechanics again:
In a state of thermal equilibrium, all the accessible microstates of the system are equally probable.
This postulate actually suffices to derive all of equilibrium statistical mechanics; furthermore, thermodynamics comes out as a special case. Now let's go over the highlighted words in the definition one by one, starting with thermal equilibrium.

Thermal equilibrium is defined as the situation in which time averages of macroscopic quantities are independent of time. In our gas-in-a-container example, if we look at the velocity of a particular gas molecule at an instant of time (a microscopic quantity), we don't expect it to be time-independent, since the molecule undergoes collisions with other particles and with the wall. But the average velocity of all the particles (a macroscopic quantity) is zero for all time, since the velocities to the right are compensated by the ones to the left, and similarly for the other directions. Thus we say that the average velocity in thermal equilibrium is time-independent. Similarly, the average energy and pressure are also time-independent macroscopic quantities.
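A cartoon of this contrast (my own illustration, with velocities redrawn from a Gaussian at each sampled instant, which is roughly what collisions accomplish in a dilute gas): the velocity of one tagged molecule jumps around, while the average over all molecules stays pinned near zero.

```python
import numpy as np

rng = np.random.default_rng(1)

n_particles = 100_000
for t in range(5):
    # x-components of the velocities at time t (Maxwell-like distribution)
    v = rng.normal(0.0, 1.0, n_particles)
    print(f"t={t}: tagged molecule v = {v[0]:+.3f}, "
          f"average over all molecules = {v.mean():+.5f}")
```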

Averages are taken with respect to some probability distribution. The only way a macroscopic quantity can become time-independent is for the probability distribution itself to be independent of time, because then all the moments of all the variables are guaranteed to be independent of time as well.

The next term to be defined is accessible microstate. A microstate is defined by specifying the state of each constituent of the system, i.e. all the coordinates $(q,p)$ of each particle. The accessible microstates are the ones allowed by the constraint that the total energy is $E$. If you increase $E$, more states become accessible; if you let $E \to 0$, almost all particles come to rest, and only a limited number of states remain accessible. (This configuration under these constraints is called the microcanonical ensemble.)

Now comes the crucial part: equally probable. Before we define it, let us ask the question 'Can we derive equiprobability from classical dynamics?'. Actually, we can derive its converse. By that I mean: if at any instant of time all the microstates of a system are equally probable, then we can show that it remains so for all time, i.e. the system is in equilibrium. But we are searching for the other direction: if we have thermal equilibrium, what is the probability distribution?

In classical dynamics, the phase space density $\rho$ of the system obeys the Liouville equation: \[\frac {\partial \rho}{\partial t} = \{H,\rho \} \tag{2}\] In equilibrium this expression is equal to zero, i.e. $\rho$ is independent of time. \[\frac {\partial \rho}{\partial t} = \{H,\rho \} = 0 \tag{3}\] Thus we can say that $\rho$ is a constant of the motion. Furthermore, eqn. $(3)$ suggests that $\rho$ must be a function of $H$ and possibly of other constants of the motion.
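To spell out that last step: for any differentiable function $f$, the chain rule for Poisson brackets gives \[\{H, f(H)\} = f'(H)\,\{H,H\} = 0, \] so any density $\rho = f(H)$ that depends on the phase-space point only through $H$ (and possibly through other constants of the motion) automatically satisfies the equilibrium condition $(3)$.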

So far, the only thing we know about our closed, isolated system is that it lies on the energy shell defined by $(1)$. With only this information in hand, our equilibrium phase space density function ($\rho_{eq}$) can only be the Dirac delta function of $H(q,p)$ peaked at the value of $E$: \[\rho_{eq}=\rho_{eq}(H) = \delta(H(q,p) - E) \tag{4} \] If we take the energy shell on which our system lies and divide it into little patches, eqn. $(4)$ does not distinguish one patch from another. This shows that patches of equal volume in phase space have equal probability. Thus our assumption of equal probability seems to be satisfied... but with one caveat! We need to specify a point in phase space in order to describe a state, yet a point in a continuous space is a set of measure zero (in a continuum, the probability of finding any specific value is zero)!

We need to make a further assumption, namely that we have a finite resolution, in order to give a finite, equal probability to the patches in our phase space. We cannot specify a point in phase space exactly; we don't have infinite precision available to us. So we divide our phase space into unit cells; as long as our unit cell is finite, we are in good shape. The problem of having uncountably many points in our phase space is tamed by discretizing, by giving a resolution. So we define a microstate as an elementary volume element in phase space. But where does this finite resolution come from?

Nature comes to our rescue at this point: quantum mechanics tells us that, due to the uncertainty principle, we only have a resolution up to a constant times $h$, Planck's constant. This comes from the foundational fact that you cannot specify the coordinate $q$ and its conjugate momentum $p$ at the same time; thus there is an inherent resolution in our $(q,p)$ phase space. We can construct the whole theory of classical statistical mechanics pretending that there is a finite resolution in our phase space.

Finally, we can compute the number of accessible microstates $\Omega (E)$: \[\Omega (E) = \frac{\mathcal{V}(E)}{h^{3N}}\tag{5}\]where $ \mathcal{V}(E)$ is the volume of the accessible phase space, divided into unit cells of $h^{3N}$ (one factor of $h$ for each $(q,p)$ pair). The fact that our probability density says $H(q,p) = E$ and nothing more implies that each of these microstates carries as much weight as any other; hence they are equally probable, with probability \[\mathcal{P} = \frac{1}{\Omega (E)} \tag{6}\] Thus we have shown that the equiprobability of accessible microstates is the most crucial assumption in equilibrium statistical mechanics, and that it holds for a phase space discretized according to the quantum mechanical restriction of finite resolution.
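To make eqns. $(5)$ and $(6)$ concrete, here is a small numerical sketch under my own simplifying assumptions: instead of a gas, take $N$ one-dimensional harmonic oscillators with $m=\omega=1$, so the phase space is $2N$-dimensional and the unit cell is $h^{N}$ (one $h$ per $(q,p)$ pair). The accessible volume $\mathcal{V}(E)$ of the shell $E \le H \le E+\delta E$ is estimated by hit-or-miss sampling and compared against the exact answer:

```python
import numpy as np
from math import factorial, pi

rng = np.random.default_rng(0)

N = 2                # oscillators -> 2N-dimensional phase space
E, dE = 1.0, 0.05    # the energy shell E <= H <= E + dE  (m = omega = 1)

# enclosing box: on the shell every coordinate satisfies |x| <= sqrt(2(E+dE))
R = np.sqrt(2 * (E + dE))
n_samples = 2_000_000
x = rng.uniform(-R, R, size=(n_samples, 2 * N))
H = 0.5 * np.sum(x ** 2, axis=1)            # H = sum_i (p_i^2 + q_i^2) / 2
frac = np.mean((H >= E) & (H <= E + dE))    # fraction landing in the shell
vol_mc = frac * (2 * R) ** (2 * N)

# exact: V(H <= E) = (2*pi*E)^N / N!  (volume of a 2N-ball of radius sqrt(2E))
vol_exact = (2 * pi) ** N * ((E + dE) ** N - E ** N) / factorial(N)

print(f"Monte Carlo shell volume: {vol_mc:.4f}")
print(f"exact shell volume:       {vol_exact:.4f}")
# Omega(E) = volume / h**N, and each microstate then has probability 1/Omega(E)
```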

These notes have been composed with the help of the wonderful Classical Mechanics lectures of Prof. V. Balakrishnan on YouTube.

Saturday, April 18, 2015

Random Curiosities - I

There are always a lot of things I encounter that make me think I should write about them on the blog, but I've had no time to do so... In order to deal with this, and to keep some kind of an 'interesting reading list' for myself and for other interested readers, I decided to share these 'random curiosities' in bulk. So here are this week's:

- The most significant and interesting thing that I've learnt this week was the so-called 'Benford's Law'. It is an unintuitive law concerning the frequency distribution of the first digits of random data. I will definitely elaborate on it in one of my future posts, but there are a lot of great references where you can dive deep into it! For instance, this article from Plus Maths nails it down really nicely! If you have only 10 minutes, this video from Numberphile would work too. (A quick numerical illustration is sketched at the end of this list.)

- Moving on with statistics, there is this really curious article in Quanta Magazine called 'For Persi Diaconis’ Next Magic Trick'. It is about shuffling a deck of cards in order to make them 'random'. The method considered is not ordinary shuffling, for which it has already been proven that only seven shuffles are enough, but 'smooshing', which is basically spreading the cards out on a table, swishing them around with your hands, and then gathering them up. Stanford mathematician Persi Diaconis's proposal involves one of the fundamental ideas of statistical physics, namely mixing. Quite an inspiring read...

- One of my pastime activities, especially in the busy traffic of Istanbul, is to listen to interesting podcasts on the way, and one of my favourites is ABC's 'Future Tense'. It is some kind of 'futuristic' program, but all the ideas are grounded in the present; it is not a sci-fi kind of future in that sense. Last week's program was themed 'Crowds and Motion', which deals with interesting problems and proposed solutions in urban planning, modeling crowds as particles, and related simulations. There were two interesting models: one with pedestrians walking in open space avoiding collisions, and the other the 'Steffen method' of airplane boarding. Further details and the recorded audio are on the website.

- The web's first-class online science magazine Nautilus has 'Dominoes' as its theme this month, and the first week's installment includes a fascinating article called 'The Amazing, Autotuning Sandpile'. It is one of those 'simple to state but with jaw-dropping consequences' kinds of problems in statistical physics: a simple mathematical model of a sandpile creates very complex dynamical patterns and behaviour, namely avalanches. The article is accompanied by nice pictures and simulations.

- On the biology side, another Quanta Magazine article released this week was 'How Structure Arose in the Primordial Soup', telling the story of the search for the origins of fundamental transitions, like the formation of the cell, the creation of the genetic code, and the emergence of energy-supplying metabolic functions, long before the last common ancestor. The main ideas resonate with Maynard Smith's wonderful book that I've been reading recently, The Origins of Life, and an interesting method is also proposed involving tracing the amino-acid code in the Tree of Life. An inspiring read...

- On the academic tips-and-tricks side, I've read a very well-written, compact blog post, 'A few steps toward cleaner, better-organized code', something I desperately need more and more.

- And finally, something that seems like trivia but is one of the most complex things I've seen lately: solving a 17 X 17 Rubik's Cube. It is just beyond explanation...
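As promised in the Benford's law item above, here is a quick numerical illustration (a sketch of the standard demonstration, using the powers of 2 as the data set): the observed first-digit frequencies sit right on top of the Benford prediction $\log_{10}(1 + 1/d)$.

```python
from collections import Counter
from math import log10

# leading digits of 2^1, 2^2, ..., 2^1000
first_digits = [int(str(2 ** n)[0]) for n in range(1, 1001)]
counts = Counter(first_digits)

print("digit  observed  Benford")
for d in range(1, 10):
    print(f"{d:>5}  {counts[d] / 1000:8.3f}  {log10(1 + 1 / d):7.3f}")
```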

Friday, April 10, 2015

Constructing a Cantor Set

Seemingly basic and simple things generally prove to be really complex when we are talking about real numbers. We discussed one of them in a recent session of the class I've been following this semester as a guest student. The class is an introductory course exposing beginning physics undergraduates to rather abstract mathematical notions and related theoretical physics problems. For instance, we started with a simple billiard problem, ended up counting the real numbers, and who knows where we will end up... Today our stop was the famous Cantor set.

We construct the Cantor set by dividing the closed interval $[0,1]$ into 3 equal parts and leaving the middle part out. Then we apply the same process to the remaining two parts, i.e. to the intervals $[0,\frac{1}{3}]$ and $[\frac{2}{3},1]$, and iterate infinitely many times. After several steps the picture looks more or less like this:


Let us call the set of intervals left at step $n$ $C_n$, for $n = 0, 1, 2, \dots$ For instance, $C_0 = [0,1]$ and $C_1 = [0,\frac{1}{3}] \cup [\frac{2}{3},1]$, and so on. The Cantor set is the intersection of all these sets: \[ C = \bigcap_{n} C_n \] We can also see that the $C_n$ form a contracting (nested) sequence, i.e. $C_0 \supset C_1 \supset C_2 \supset ... \supset C_n \supset ...$. If we denote the length of each interval at step $n$ by $\Delta_n$, we can see that $\Delta_n = \frac{1}{3^n}$. Also let $N_n$ be the number of intervals at step $n$; thus $N_n = 2^n$.
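The construction is easy to play with numerically. Here is a small sketch (my own illustration, not from the class) that builds the intervals of each $C_n$ with exact rational arithmetic and checks the counts and lengths:

```python
from fractions import Fraction

def remove_middle_thirds(intervals):
    """Replace every interval [a, b] by its left and right thirds."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))     # left third
        out.append((b - third, b))     # right third
    return out

intervals = [(Fraction(0), Fraction(1))]           # C_0 = [0, 1]
for n in range(6):
    N_n = len(intervals)                           # should be 2^n
    delta_n = intervals[0][1] - intervals[0][0]    # should be 1/3^n
    print(f"n={n}: N_n={N_n}, Delta_n={delta_n}, total length={N_n * delta_n}")
    intervals = remove_middle_thirds(intervals)
```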



Now we have all we need to calculate the total length of the intervals at the $n^{th}$ step, which is given by \[l_n = \Delta_n \times N_n =  \left(\frac{2}{3}\right)^n\]Remember that we carry out the procedure of cutting into three and discarding the middle part infinitely many times; hence when $n \to \infty$, $l_n \to 0$, i.e. the total length contracts to zero in the limit. Does that mean that our Cantor set is empty?

Obviously not, since we can easily see that the endpoints of the intervals we create at each step stay in the set, as in the case of $0, 1, \frac{1}{3}, \frac{2}{3}, \frac{1}{9}, \frac{2}{9}$, etc. In fact, we can define a map \[x_n = \frac{1}{3^n} \,\,\,\,\,\,\,   \mathbb{N} \to C \]which assigns to each natural number a distinct endpoint. This implies that $\mathbb{N} \preceq C$. Since $\mathbb{N}$ is dominated by $C$, the cardinality of $C$ is at least $\aleph_0$. Meanwhile, since $C \subset [0,1]$, the cardinality of $C$ cannot exceed $\aleph_1$, the cardinality of the $[0,1]$ interval itself (proof omitted). The critical question is whether the cardinality of $C$ is also $\aleph_1$, and the surprising answer is YES!

In order to prove this, we start by labeling the intervals with 0-1 sequences, so that after each division the remaining left part is denoted by $0$ and the right one by $1$. Let us start by naming the interval $[0,1] = I$. We divide it into 3 and leave out the middle part, labeling the left part $I_0$ and the right one $I_1$. Then we continue the iterations as in the figure below:


Every element of the Cantor set can be traced back through the upper levels by noting whether it lies in the left interval or the right one at each level. For instance, if we look for $\frac{1}{4}$: it is in the interval $I$, then in $I_0$ (left), then $I_{01}$ (right), then $I_{010}$ (left again)... It alternates between the left interval and the right one. We have denoted the left and right intervals by $0$ and $1$ respectively, so we can claim that each infinite 0-1 sequence corresponds to an element of our Cantor set. If we denote the set of all 0-1 sequences by $l_{\{0,1\}}$, then it can be shown that $card (l_{\{0,1\}}) = \aleph_1$ (proof omitted). Since we have previously said that the cardinality of $C$ cannot exceed $\aleph_1$, we can finally deduce that:
\[l_{\{0,1\}} \preceq C\] thus,
\[ card(C) = \aleph_1\]We have shown that the cardinality of the Cantor set is $\aleph_1$. This means that, starting from the interval $[0,1]$, we removed infinitely many intervals from in between, and yet we ended up with a set with the same "number of elements" we started with in the first place!
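The left-right labeling can also be checked numerically: points of the Cantor set are exactly those with a ternary expansion using only the digits 0 and 2, and halving those digits gives the 0-1 labels above. A small sketch (again my own illustration) recovering the alternating sequence for $\frac{1}{4}$:

```python
from fractions import Fraction

def left_right_labels(x, n_levels=8):
    """Ternary digits of x in [0, 1], mapped to left/right labels:
    ternary digit 0 -> left (0), ternary digit 2 -> right (1)."""
    labels = []
    for _ in range(n_levels):
        x *= 3
        digit = int(x)                 # next ternary digit of x
        x -= digit
        assert digit in (0, 2), "point is not in the Cantor set"
        labels.append(digit // 2)
    return labels

# 1/4 = 0.020202..._3, so it is in the Cantor set and alternates left/right
print(left_right_labels(Fraction(1, 4)))    # -> [0, 1, 0, 1, 0, 1, 0, 1]
```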

Tuesday, April 7, 2015

First Encounters with the Population Dynamics


It all started with a workshop; in fact, long before that, but I think this is a good opportunity to write the first post of the new blog and stop making excuses not to write, once and for all! That being said, the first post of 'Turbulent Dynamics' is on a mini-workshop called 'Population Dynamics', held this week at the Istanbul Center for Mathematical Sciences (IMBM).

Population dynamics is an area of research mainly focused on biological systems (ecology, evolution, infectious diseases and so on), though there are clearly exceptions, with other interesting applications such as linguistics. It is a very multi-cultural area, in the sense that you encounter very different people, motivated by problems not particularly in their main field, who are willing to apply their own expertise to shed new light on them and gain new perspectives. In our case, the problems generally arise from biological motivations, and mostly physicists and applied mathematicians are more than willing to apply various modeling schemes to further understand the underpinnings and mechanisms of the problems at hand.

The mini-workshop was co-organized by Atilla Yılmaz from the Bogazici University Mathematics Department and Muhittin Mungan from the Physics Department of the same university. There were four interesting short talks focusing on two main problems, namely the stochastic encounter-mating model and ecological niche theory.

Together with his collaborator Onur Gün from the Weierstrass Institute for Applied Analysis and Stochastics in Berlin, Germany, A. Yılmaz introduced their recent work on the encounter-mating model, which is, in their own words [1]:
... a new model of permanent monogamous pair formation in zoological populations comprised of k types of females and males, which unifies and generalizes the encounter-mating models of Gimelfarb (1988). In our model, animals randomly encounter members of the opposite sex at their so-called firing times to form temporary pairs which then become permanent if mating happens. Given the distributions of the firing times and the mating preferences upon encounter, which depend on the sex and the type of the animals, we analyze the contingency table Q(t) of permanent pair types at any time t.
Both of the talks were quite rigorous, in the sense that they derived their results from basics like the classical Lotka-Volterra (LV) equations and then imposed stochasticity on them. One of the key aspects of their work is its relation to panmixia, i.e. random mating in a population, which is known to be one of the main assumptions in population genetics.

Another talk was given by M. Mungan on plant-pollinator webs, which investigates the relationship between plants and their pollinators based on real ecological field data. Again starting by modeling the relationship between the plants and various pollinators with LV equations, the stable configurations of the species abundances are sought, and the model predictions are compared with the real data. It was quite inspiring to see that, with such sparse data, the model can predict the actual findings pretty well. It was also interesting for me to think about the field work and data side of all these problems, and about the relationship between the models and the modeled ecology.

The final talk was reserved for our guest from the Institute of Physics, Universidad de la República, Uruguay: Hugo Fort. He introduced niche theory in general, giving many examples of various species and their interactions with other species and their environment. The main part of his talk was on evolutionary game theory, which regards the reproductive rates of individuals (i.e. their fitness) as frequency-dependent, so that they can be modeled as a simple game with the strategies being the species themselves and the payoffs being the interaction coefficients of the LV system. Starting from the classical Prisoner's Dilemma game, he presented different simulation results across the parameter space of the game.
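To give a flavour of this frequency-dependent picture, here is a minimal sketch of my own (not H. Fort's actual model): the species frequencies $x_i$ are evolved with the replicator equation $\dot{x}_i = x_i\,[(Ax)_i - x^{T}Ax]$, where the payoff matrix $A$ plays the role of the interaction coefficients of the LV system. With the classic hawk-dove payoffs, the population relaxes to the mixed equilibrium:

```python
import numpy as np

# hawk-dove payoff matrix with resource V = 2 and fight cost C = 4;
# rows/cols = (hawk, dove); the mixed equilibrium is x_hawk = V / C = 0.5
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

x = np.array([0.9, 0.1])     # start hawk-heavy
dt = 0.01
for step in range(20_000):
    fitness = A @ x                    # payoff of each strategy
    avg = x @ fitness                  # population-average payoff
    x = x + dt * x * (fitness - avg)   # replicator update (Euler step)

print(f"hawk frequency: {x[0]:.3f}  (predicted equilibrium: {V / C:.3f})")
```

The replicator equation is in fact known to be equivalent to a Lotka-Volterra system of one lower dimension, which is what lets the game-theoretic and ecological readings trade places.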

This compact mini-workshop gave me the opportunity to listen to interesting talks on topics I have been trying to learn over the last two months. I am pleased that I could follow the main ideas thoroughly; the key ideas and methods presented were ones I was more or less acquainted with, thanks to the reading group I started this semester with one of my friends and M. Mungan. More on that in future posts...

References:

[1] Onur Gün, Atilla Yılmaz, 'The stochastic encounter-mating model', 2014, http://arxiv.org/abs/1408.5036