-rw-r--r-- README.md 21
1 files changed, 19 insertions, 2 deletions
diff --git a/README.md b/README.md
index 6ae881a..16b12ff 100644
--- a/README.md
+++ b/README.md
@@ -60,5 +60,22 @@ Data in `res/` can be regenerated by issuing
from the main project directory.
-##Numerical integration##
-TODO
\ No newline at end of file
+## Interpretation ##
+This task was described very vaguely. How was I supposed to interpret the "polynomial order" argument in the context of a quadrature? Since Newton-Cotes quadratures come from integrating an interpolating polynomial, I assumed that this is the polynomial whose order is passed. As a result, polynomial order 0 corresponds to the rectangle (midpoint) method, order 1 to the trapezoid method and order 2 to Simpson's method. I know this mapping lacks consistency (the rectangle method is an open quadrature while the other two are closed), but these three were the quadratures required by the task. Applying Gaussian quadratures is also equivalent to integrating an interpolating polynomial, just with a different distribution of interpolation points, so the 1-point quadrature corresponds to interpolation with a polynomial of degree 0, the 2-point quadrature to a polynomial of degree 1, and so on.
+This way of thinking *is* extremely unintuitive and misleading, yet it was the only way I could make use of this extra "polynomial order" parameter. Another idea was to simply ignore it, as at least one other person did...
+I'm pretty sure that what I did here is *not* what the author of the task meant, but with a task this broken, nothing better could be done.
+At least I used function pointers, which I believe we were supposed to do, so no one can say that I made things easier for myself here...
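+
+For concreteness, here is a minimal sketch of this interpretation: the integrand is passed as a procedure argument and the "polynomial order" selects the Newton-Cotes rule. The module and routine names, and the exact interface (in particular the explicit subinterval count `n`, see the remark below), are my own illustration, not the signature required by the task:
+
+```fortran
+module newton_cotes_sketch
+  implicit none
+  integer, parameter :: dp = kind(1.0d0)
+
+  abstract interface
+    function integrand(x) result(y)
+      import :: dp
+      real(dp), intent(in) :: x
+      real(dp) :: y
+    end function integrand
+  end interface
+
+contains
+
+  ! Integrate f over [a, b] split into n subintervals, using the rule that
+  ! corresponds to an interpolating polynomial of the given order:
+  ! 0 -> rectangles (midpoint), 1 -> trapezoid, 2 -> Simpson.
+  function integrate(f, a, b, order, n) result(s)
+    procedure(integrand) :: f
+    real(dp), intent(in) :: a, b
+    integer,  intent(in) :: order, n
+    real(dp) :: s, h, x0, x1
+    integer  :: i
+
+    h = (b - a) / n
+    s = 0.0_dp
+    do i = 1, n
+      x0 = a + (i - 1) * h
+      x1 = x0 + h
+      select case (order)
+      case (0)   ! open rule: sample the midpoint of the subinterval
+        s = s + h * f(0.5_dp * (x0 + x1))
+      case (1)   ! closed rule: trapezoid
+        s = s + 0.5_dp * h * (f(x0) + f(x1))
+      case (2)   ! closed rule: Simpson
+        s = s + h / 6.0_dp * (f(x0) + 4.0_dp * f(0.5_dp * (x0 + x1)) + f(x1))
+      end select
+    end do
+  end function integrate
+
+end module newton_cotes_sketch
+```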
+
+Another ugly thing is the way I pass the number of subintervals... The required interface of the integration functions was specified in the task without a parameter for the number of subintervals. Perhaps it should be there instead of this stupid "polynomial order"?
+
+In a real project I would obviously do things a different way.
+## Results analysis ##
+#### Running time ####
+I did everything on a 2-core CPU. I am surprised there is such a huge time penalty for a larger number of images. Does creating and synchronizing 8 images really take over 2.5 seconds, as `res/times` suggests!? When doing similar things in C with pthreads, I could easily have this many threads created and joined in under 0.01 s! "He must have done something wrong here", one could say about me. Well, when we used coarrays in a lab class, code compiled with ifort would also take SECONDS to synchronize ;_;
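+
+For reference, the synchronization cost can be looked at in isolation with a snippet along these lines (an illustrative standalone program, not part of the project; build it with coarray support, e.g. `ifort -coarray`):
+
+```fortran
+program sync_cost
+  implicit none
+  integer :: t0, t1, rate
+
+  call system_clock(count_rate=rate)
+  call system_clock(t0)
+  sync all                              ! barrier across all images
+  call system_clock(t1)
+
+  if (this_image() == 1) then
+    print '(a,i0,a,f8.4,a)', 'sync all across ', num_images(), &
+      ' images took about ', real(t1 - t0) / real(rate), ' s'
+  end if
+end program sync_cost
+```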
+
+#### Quadrature accuracy ####
+When summing a lot of numbers, the order matters. If we sum 1 000 000 numbers one by one, we are likely to accumulate a larger floating-point error than if we sum them in chunks of 100 000 and then add the ten partial sums together. This suggests that, for a high number of subintervals, the program run with more images should give better results (each image first sums the integrals over its own subintervals and the first image then adds up the partial sums), right? Not in this case, because I employed a smarter summation scheme. The results for different numbers of images still vary a little, but the error stays at the same level.
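+
+As a toy illustration of the summation-order effect (not the actual scheme used in the project, just the textbook behaviour in single precision):
+
+```fortran
+program sum_order
+  implicit none
+  integer, parameter :: sp = kind(1.0)
+  integer, parameter :: n = 1000000, nchunks = 10
+  real(sp) :: one_by_one, chunked, partial
+  integer  :: i, c
+
+  ! Summing a million copies of 0.1 straight into a single accumulator...
+  one_by_one = 0.0_sp
+  do i = 1, n
+    one_by_one = one_by_one + 0.1_sp
+  end do
+
+  ! ...typically drifts further from the true value 100000 than summing
+  ! ten chunks of 100 000 terms and then adding the ten partial sums.
+  chunked = 0.0_sp
+  do c = 1, nchunks
+    partial = 0.0_sp
+    do i = 1, n / nchunks
+      partial = partial + 0.1_sp
+    end do
+    chunked = chunked + partial
+  end do
+
+  print *, 'true value :', 100000.0
+  print *, 'one by one :', one_by_one
+  print *, 'chunked    :', chunked
+end program sum_order
+```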
+
+I also see some weird things, like the 2-point Gaussian quadrature being in many cases more accurate than the 3-point one... I do not exclude the possibility of my own bugs here, but it is also worth noting that Gaussian quadrature with Legendre polynomials is designed for the exact integration of polynomials, the only polynomial among our test functions is one of degree 10, and the 3-point Gaussian quadrature only integrates polynomials of degree up to 5 exactly.
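+
+The degree-of-exactness claim itself is easy to check in isolation: an n-point Gauss-Legendre rule is exact for polynomials of degree up to 2n-1, so on [-1, 1] the 3-point rule reproduces the integral of x^4 exactly while the 2-point rule does not (a standalone check with textbook nodes and weights, not project code):
+
+```fortran
+program gauss_exactness
+  implicit none
+  integer, parameter :: dp = kind(1.0d0)
+  real(dp) :: x2(2), w2(2), x3(3), w3(3)
+
+  ! Gauss-Legendre nodes and weights on [-1, 1]
+  x2 = [ -1.0_dp/sqrt(3.0_dp), 1.0_dp/sqrt(3.0_dp) ]
+  w2 = [ 1.0_dp, 1.0_dp ]
+  x3 = [ -sqrt(0.6_dp), 0.0_dp, sqrt(0.6_dp) ]
+  w3 = [ 5.0_dp/9.0_dp, 8.0_dp/9.0_dp, 5.0_dp/9.0_dp ]
+
+  ! The exact integral of x**4 over [-1, 1] is 2/5 = 0.4.
+  print *, 'exact   :', 0.4_dp
+  print *, '2-point :', sum(w2 * x2**4)   ! 2/9: degree 4 exceeds 2*2-1 = 3
+  print *, '3-point :', sum(w3 * x3**4)   ! 2/5: within 2*3-1 = 5, so exact
+end program gauss_exactness
+```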
+
+One could write more about the results here, but I'm bored enough already and I got my grade anyway 😎
\ No newline at end of file