(no subject)

Sep. 17th, 2017 08:11 am
alexseanchai: Blue and purple lightning (Default)
[personal profile] alexseanchai
[personal profile] analise010 is doing a one-card draw, or more cards for the price of a coffee.

Do you want a letter?

Sep. 15th, 2017 09:16 pm
dingsi: Close-up of Norb from Angry Beavers cartoon show. (default2)
[personal profile] dingsi
Tell me you want to receive a letter by leaving a snail mail address below. (Comments are screened, of course!) If you don't mind, also mention something you like about getting letters. Do you love decorative stationery? Want a stamp with an animal on it? Or concerning content--tidbits about daily life, a favourite poem, an explanation of a German thing that always puzzled you? A nonsensical drawing?

Brought to you by Lotsa Spare Time and also I Bought Three Different Colours Of Ink That Languish In My Stationery Drawer, Together With Said Stationery.
moonplanet: Nymph, Odysseus and Trojan horse from my story at http://moonplanet.dreamwidth.org/38646.html (opzoeknaarodysseus)
[personal profile] moonplanet
(I have written no spoilers, or as few as possible, in my review)

Title: Liefde & gelato (on Librarything, on Hebban.nl, on Goodreads)
Original title: Love & Gelato
Author: Jenna Evans Welch
Translator: Irene Paridaans
Language: Dutch, originally English
Series: no
Format: paperback
Number of pages: 318
Publisher: Harper Collins
Year published: original English and Dutch 2016, my edition 2016 (1st edition)
ISBN number: 9789402714302
Keywords: parents, friends, Italy, love and of course ice cream!
Reason for reading: I had won the Young Adult bag design contest of the Van Piere bookshop, and as a prize I got a bag full of books, which included this one. It already seemed like an interesting book to me anyway, so that worked out nicely :)
Recommended: I had actually heard from so many people that this was such a fantastic book that I was a bit disappointed... It is a fun book, but not super special and fantastic. So don't expect too much, and you won't be disappointed! It is nice to read without having to think about it much.

Short summary:
The American Lina has to spend a summer in Italy, at her mother's wish, with a man she has never known and who appears to be her father. Once there, she gets hold of her mother's diary, which describes how her mother met Lina's father in Italy and everything that happened there, which led to her eventually leaving for America on her own, so that Lina never got to know her father before. The boy next door helps Lina unravel the mystery.

Back cover text:
You go to Italy for love and gelato, but sometimes you find more than that...

Lina is spending the summer in Tuscany, but at first she is not really in the mood for the Italian sun and the beautiful landscape. She is really only there because it was her mother's dying wish that she go to Italy to get to know her father.

But then she gets hold of a diary in which her mother writes about her own time in Italy and her secret love. From the first page Lina is fascinated, and she wants to unravel the diary's secrets. For that she needs some Italian help. Enter the boy next door, Lorenzo...

A beautiful story in which two love stories are interwoven in a refined way.

First paragraph of chapter 1:
In the distance the light of the house loomed, like a lighthouse in a sea of gravestones. But surely this couldn't be his house? This had to be some Italian custom. Always drive newcomers past a cemetery. That way they get to know the local culture. Yes, it had to be something like that.
I laced my fingers together in my lap, and as the house came closer my stomach turned. It was like watching Jaws emerge from the depths of the ocean. But this wasn't a movie. This was real. And there was only one turn left. Don't panic. This can't be it. Mom would never send you to a cemetery. She would have warned you. She would -
He switched on his turn signal and all the air escaped from my lungs. She just didn't tell me.
'Everything okay?'
Howard - I suppose I have to call him my father - looked at me with a worried expression on his face. Probably because I had just made a squeaking sound.
'Is that your...?' I couldn't find the words, so I had to point.
'Uh... yes.' He hesitated a moment and then pointed outside. 'Lina, didn't you know about this? All of this?'

About the first paragraph:
There is also a prologue, but because it addresses the reader directly, it is not a good example of the writing style.

Review:
Story:
It is not a story you have to think about very hard, which does make it a "summery" book. The reason Lina goes to Italy is not a happy one: her mother's death means that Lina cannot always fully enjoy all the nice things in Italy. Yet her mother, through her diary, also makes sure that Lina does find all kinds of beautiful places and moments in Florence.

Lina reads the diary in pieces, gradually discovering more about why her mother loved Florence so much. At the beginning Lina does not like Italy at all, but that is also because it was not her own choice to go there. Every time her mother describes a new place in the diary, Lina wants to see it too. Together with Howard (whom she doesn't want to tell that she is reading her mother's diary) or the boy next door Lorenzo (Ren), whom she has shown the diary, she visits the same places as her mother.

The diary fragments in between do make the story more exciting, because you don't learn everything at once, but in reality I would expect Lina to have read the diary in one sitting. Other people who read the diary finished it within two hours...

The two themes from the title come through clearly: Lina eats ice cream several times in Italy and finds it delicious, but love plays the biggest role. Through her mother's diary, Lina gets to know her father and her mother's other friends. Meanwhile, through her neighbour Ren, she also meets the other people who attend the international school (even though it is now summer break). And of course there is a handsome boy among them... But because Ren helps Lina the most with the search, she spends most of her time with Ren...

Both the love stories of Lina's mother and of Lina herself come across as believable. The other characters, such as Ren and the people in his group of friends, but also Howard, I found almost more interesting to read about than Lina herself, even though you only experience everyone from Lina's point of view.

Writing style:
Lina tells her story in the first person, so you also read her thoughts.
In between, Lina reads her mother's diary, where her mother tells her own story in the first person. The diary passages use a different typeface, so the two storylines are easy to keep apart.

Spelling errors/typos:
- Page 33:
'Spreekt uw misschien Engels?' should be
'Spreekt u misschien Engels?'
- Page 35:
'Laten we deze kant opgaan. should be
'Laten we deze kant opgaan.' (the closing quotation mark is missing)
- Page 182 hyphenates "stoeprand" as "stoe-prand", and page 313 hyphenates "stoeprandje" as "stoe-prandje".

Conclusion:
A nice little book for in between, but unfortunately not as fantastic as you'd hope if you start it with overly high expectations!

Rereadability:
I'm not going to read it again.

Links:
- This review on Goodreads.
- This review on Hebban.nl.
moonplanet: Stargate Atlantis, slightly edited screenshot (stargate-atlantis)
[personal profile] moonplanet
Title: Getting Started with TensorFlow - Get up and running with the latest numerical computing library by Google and dive deeper into your data! (on Librarything, Goodreads)
Author: Giancarlo Zaccone
Language: English
Series: Packt Publishing - Community Experience Distilled
Format of publication: Published in several formats, but I'm reading the PDF ebook.
Number of pages: 178
Publisher: Packt Publishing Ltd.
Year published: original 2016, my edition 2016 (1st edition)
ISBN number: 978-1-78646-857-4
Topics: Google TensorFlow, Python, machine learning, neural networks, deep learning
Reason for reading: It sounded interesting and it was available as a free ebook on the publisher's website for a limited time.
Recommended: To get an idea of what TensorFlow is, you can read this book, but it's not the first one I'd recommend: there are better introductory books and/or online tutorials on this topic. The examples in this book are in Python. Another thing is that there are lots of sentences that sound a bit odd... Note: I did not test the code to see whether it compiled/worked.

Short summary:
This book contains lots of examples to get started with TensorFlow in Python.

Back cover text (from the publisher's website):
Google's TensorFlow engine, after much fanfare, has evolved in to a robust, user-friendly, and customizable, application-grade software library of machine learning (ML) code for numerical computation and neural networks.

This book takes you through the practical software implementation of various machine learning techniques with TensorFlow. In the first few chapters, you'll gain familiarity with the framework and perform the mathematical operations required for data analysis. As you progress further, you'll learn to implement various machine learning techniques such as classification, clustering, neural networks, and deep learning through practical examples.

By the end of this book, you’ll have gained hands-on experience of using TensorFlow and building classification, image recognition systems, language processing, and information retrieving systems for your application.

First three paragraphs of the Preface:
TensorFlow is an open source software library used to implement machine learning and deep learning systems.
Behind these two names are hidden a series of powerful algorithms that share a common challenge: to allow a computer to learn how to automatically recognize complex patterns and make the smartest decisions possible.
Machine learning algorithms are supervised or unsupervised; simplifying as much as possible, we can say that the biggest difference is that in supervised learning the programmer instructs the computer how to do something, whereas in unsupervised learning the computer will learn all by itself.

Review:
Content:
The preface mentions that the examples in this book are in Python, which makes sense as the author has also written another Python book for Packt Publishing.
The chapter summaries below have been copied from the preface as well (in italics); each is followed by my own comments.

Chapter 1: TensorFlow - Basic Concepts
Chapter 1, TensorFlow – Basic Concepts, contains general information on the structure of TensorFlow and the issues for which it was developed. It also provides the basic programming guidelines for the Python language and a first TensorFlow working session after the installation procedure. The chapter ends with a description of TensorBoard, a powerful tool for optimization and debugging.


The introduction on the basic concepts (machine learning and deep learning) is quite short and the introduction to Python is targeted at people already familiar with programming (in any language, it seems).
And the installation instructions for Windows are just: install VirtualBox, install Ubuntu in there and then follow the Linux installation instructions :P
After installing, the author shows a sample program both as plain text and as screenshots of that text, because the screenshots have syntax colouring. He also explains in more detail what the program does (including the "Data Flow Graph"), because these are the basic concepts of TensorFlow.
The author includes images explaining the concepts as well. TensorBoard is a tool to visualize and analyze the Data Flow Graph, which sounds very useful.
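The "build the graph first, run it later" idea behind the Data Flow Graph can be sketched without TensorFlow itself. The following is my own minimal plain-Python analogue of that deferred-execution model (the `Node` class and helpers are hypothetical, not code from the book):

```python
# Minimal sketch of TensorFlow's deferred-execution model: first build a
# graph of operations, then evaluate it, like session.run() in TF 1.x.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # function that computes this node's value
        self.inputs = inputs  # upstream nodes feeding into this one

    def run(self):
        # Recursively evaluate upstream nodes, then apply this node's op
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

# Build the graph for (2 + 3) * 4 -- nothing is computed yet
graph = mul(add(constant(2), constant(3)), constant(4))
print(graph.run())  # 20
```

TensorBoard visualizes exactly this kind of structure: the nodes and the edges between them, rather than the computed values.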

Chapter 2: Doing Math with TensorFlow
Chapter 2, Doing Math with TensorFlow, describes the ability of mathematical processing of TensorFlow. It covers programming examples on basic algebra up to partial differential equations. Also, the basic data structure in TensorFlow, the tensor, is explained.


As matrices are also tensors, the examples include those as well (including pictures to make the explanations clearer). The most visual examples are those of importing and rotating a JPEG, as well as the Mandelbrot and Julia sets, which show how to create and manipulate images. There are also a few sections about generating different kinds of random numbers and how to visualize them. The last part of the chapter is entirely about "partial differential equations" in TensorFlow, and includes a lot of graphs.
This chapter is mostly an introduction to some basic math functions for which you can use TensorFlow, including images to clarify the concepts (as there's not much explanation of the math in this chapter).
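As a rough illustration of what the Mandelbrot example computes: the book iterates z → z² + c over whole grids of points at once using TensorFlow tensors, while this plain-Python sketch (my own, one point at a time) shows the underlying escape-time idea:

```python
# Escape-time computation for the Mandelbrot set, plain-Python sketch.
# A point c belongs to the set if the iteration z -> z**2 + c never
# escapes the circle |z| = 2; the escape count is what gets coloured.

def escape_count(c, max_iter=50):
    """Iterations before |z| exceeds 2 (max_iter if it never does)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(escape_count(0 + 0j))  # 50 -- inside the set, never escapes
print(escape_count(2 + 2j))  # 1  -- far outside, escapes immediately
```

The Julia set examples are the same iteration with c fixed and the starting z varying per pixel.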

Chapter 3: Starting with Machine Learning
Chapter 3, Starting with Machine Learning, introduces some machine learning models. We start to implement the linear regression algorithm, which is concerned with modeling relationships between data. The main focus of the chapter is on solving two basic problems in machine learning; classification, that is, how to assign each new input to one of the possible given categories; and data clustering, which is the task of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups.


This chapter covers several data mining algorithms, but mostly shows the TensorFlow Python code and images of the results. For more extensive descriptions of these data mining algorithms, see one of the books in the "Similar books/recommendations" section below.
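The linear regression the chapter starts with can be sketched without TensorFlow at all. This is my own minimal gradient-descent version (the book's TensorFlow code does the same thing but lets TF compute the gradients automatically):

```python
# Simple linear regression y = w*x + b fitted by gradient descent on the
# mean squared error, plain-Python sketch.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated from y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05                    # learning rate
for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # 2.0 1.0 -- recovered y = 2x + 1
```

Classification and clustering in the chapter follow the same pattern: define a loss, then let the optimizer drive it down.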

Chapter 4: Introducing Neural Networks
Chapter 4, Introducing Neural Networks, provides a quick and detailed introduction of neural networks. These are mathematical models that represent the interconnection between elements, the artificial neurons. They are mathematical constructs that to some extent mimic the properties of living neurons. Neural networks build the foundation on which rests the architecture of deep learning algorithms. Two basic types of neural nets are then implemented: the Single Layer Perceptron and the Multi Layer Perceptron for classification problems.


At the beginning, this chapter explains the concept of neural networks. However, most of the chapter consists of examples with a lot of source code and some images. It would have been nice if they had added syntax colouring to the code to make it more readable. There is colour in the graphs, so why not in the code?
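To make "Single Layer Perceptron" concrete, here is a minimal plain-Python version of the classic perceptron learning rule on a linearly separable problem (my own sketch, not the book's TensorFlow code):

```python
# Single-layer perceptron learning the logical AND function.
# The perceptron rule nudges the weights towards each misclassified example.

def predict(weights, bias, inputs):
    # Step activation: output 1 if the weighted sum crosses the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the data suffice here
    for inputs, target in data:
        error = target - predict(weights, bias, inputs)
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([predict(weights, bias, i) for i, _ in data])  # [0, 0, 0, 1]
```

The Multi Layer Perceptron in the chapter adds a hidden layer, which is what lets it handle problems a single layer cannot (the usual example being XOR).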

Chapter 5: Deep Learning
Chapter 5, Deep Learning, gives an overview of deep learning algorithms. Only in recent years has deep learning collected a large number of results considered unthinkable a few years ago. We’ll show how to implement two fundamental deep learning architectures, convolutional neural networks (CNN) and recurrent neural networks (RNN), for image recognition and speech translation problems respectively.


This chapter's topic is interesting, because the author now shows examples of using neural networks for two important problems. Again, lots of example code, but luckily also some images to clarify the things described in the text.
The second part of the chapter is not really an in-depth explanation of speech translation problems; instead, the author builds an LSTM network to predict the next word in an English sentence. Here, pseudocode is used instead of real code. It sounded like the most interesting topic, but the author did not go into great detail (guess I'll need to read another book for that!).
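The shape of that pseudocode loop (consume a word, update the state, emit a prediction) can be shown with a toy stand-in. This is NOT an LSTM: the "network" below is just a bigram lookup table of my own invention, only meant to show the recurrent step structure:

```python
# Toy next-word predictor illustrating the recurrent loop shape:
# for each word, update the state and predict the following word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat".split()

# Count bigram transitions as a stand-in for a trained network
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def step(state, word):
    """One recurrent step: consume a word, return (new_state, prediction)."""
    counts = transitions[word]
    prediction = counts.most_common(1)[0][0] if counts else None
    new_state = word  # a real LSTM carries a learned hidden vector here
    return new_state, prediction

state, predicted = None, None
for word in ["the", "cat"]:
    state, predicted = step(state, word)
print(predicted)  # sat -- the most common word after "cat" in the corpus
```

In the book's version the state is the LSTM's gated hidden state and the prediction is a softmax over the whole vocabulary, evaluated by the per-word perplexity mentioned in the typo list below.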

Chapter 6: GPU Programming and Serving with TensorFlow
Chapter 6, GPU Programming and Serving with TensorFlow, shows the TensorFlow facilities for GPU computing and introduces TensorFlow Serving, a high-performance open source serving system for machine learning models designed for production environments and optimized for TensorFlow.


This chapter is interesting, because it shows more kinds of things you can do with TensorFlow which you wouldn't immediately think of.

Writing style:
The writing style is quite factual and the author doesn't "joke around" or talk to the reader directly. He explains most things, but not in much detail. It reads easily, except for all the odd sentences listed below.

Spelling errors/typos:
- The link on page 5 (https://www.packtpub.com/sites/default/files/downloads/Bookname_ColorImages.pdf) does not work (you get a 404 page).
- The paragraph "TensorBoard's algorithms collapse nodes into high-level blocks and highlight groups with the same structures, while also separating out high-degree nodes. The visualization tool is also interactive: the users can pan, zoom in, expand, and collapse the nodes." appears both at the bottom of page 28 and halfway down page 29.
- Page 30: startig tensorboard on port 6006 should be starting tensorboard on port 6006
- Page 91: This takes the x_values and y_values vectors of the training set, and the assignemnt_values vector, to draw the clusters. should be This takes the x_values and y_values vectors of the training set, and the assignment_values vector, to draw the clusters.
- Page 100: The process is repeated, resubmitting to the network, in a random order, all the examples of the training set until the error made on the entire training set is not less than a certain threshold, or until the maximum number of iterations is reached. should be The process is repeated, resubmitting to the network, in a random order, all the examples of the training set until the error made on the entire training set is less than a certain threshold, or until the maximum number of iterations is reached.

- Page 102: Different metrics calculated degree of error between the desired output and the training data outputs. should be Different metrics calculate degree of error between the desired output and the training data outputs.
- Page 111: The logits are the unnormalized log probabilities output the model (the values output before the softmax normalization is applied to them). should probably be: The logits are the unnormalized log probabilities output by the model (the values output before the softmax normalization is applied to them).
- On page 118 in the code, there's the line all_x contiene tutti i punti. This makes no sense as code, so it should be commented: # all_x contiene tutti i punti. Also, was this book actually translated from Italian and did they forget to translate this sentence?
- Page 124: Deep learning techniques are a crucial step forward taken by the machine learning researchers in recent decades, having provided successful results ever seen before in many applications, such as image recognition and speech recognition. should be: Deep learning techniques are a crucial step forward taken by the machine learning researchers in recent decades, having provided successful results never seen before in many applications, such as image recognition and speech recognition.
- Page 126: Each unit transforms its input to improve its properties to select and amplify only the relevant aspects for classification purposes, and its invariance, namely its propensity to ignore the irrelevant aspects and negligible. should probably be Each unit transforms its input to improve its properties to select and amplify only the relevant aspects for classification purposes, and its invariance, namely its propensity to ignore the irrelevant and negligible aspects.
- Page 128: Each neuron of the first subsequent layer connectsonly some of the input neurons. should be: Each neuron of the first subsequent layer connects only some of the input neurons.
- Page 128: The reason for the local connectivity resides in the fact that in data of arrays form, such as the images, the values are often highly correlated, forming distinct groups of data that can be easily identified. should be: The reason for the local connectivity resides in the fact that in data of array form, such as the images, the values are often highly correlated, forming distinct groups of data that can be easily identified.
- Page 128: Each connection learns a weight (so it will get 5×5 = 25), instead of the hidden neuron with an associated connecting learns a total bias, then we are going to connect the regions to individual neurons by performing a shift from time to time, as in the following figures: should be something like: Each connection learns a weight (so it will get 5×5 = 25), so instead of the hidden neuron with an associated connection learning a total bias, we are going to connect the regions to individual neurons by performing a shift from time to time, as in the following figures:
- Page 131: So we have three feature maps of size 24×24 for the first hidden layer, and the second hidden layer will be of size 12×12, since we are assuming that for every unit summarize a 2×2 region. should be: So we have three feature maps of size 24×24 for the first hidden layer, and the second hidden layer will be of size 12×12, since we are assuming that for every unit we summarize a 2×2 region.
- Page 133: To reduce the over fitting, we apply the dropout technique. should be: To reduce overfitting, we apply the dropout technique.
- Page 135: While tf.nn.relu is the Relu function (Rectified linear unit) that is the usual activation function in the hidden layer of a deep neural network. should be: tf.nn.relu is the Relu function (Rectified linear unit) that is the usual activation function in the hidden layer of a deep neural network.
- Page 144: Furthermore, an RNN performs the same computation at each instant, on multiple of the same sequence in input. should probably be: Furthermore, an RNN performs the same computation at each instant, on multiple elements of the same sequence in the input.
- Page 145: It is also evident how you can train networks of this type, in fact, because the parameters are shared for each instant of time, the gradient calculated for each output depends not only from the current computation but also from the previous ones. should be: It is also evident how you can train networks of this type, in fact, because the parameters are shared for each instant of time, the gradient calculated for each output depends not only on the current computation but also on the previous ones.
- On page 133 the author writes "Relu" and on page 145 he writes "ReLu"... According to Wikipedia it should be written as ReLU, but I've also seen it written as ReLu.
- On page 147, there's the sentence: It comprises of the following two files:. There's an interesting discussion about "it comprises of" on this page and this page has some grammar rules on this word. I also agree it sounds quite odd... So it would probably be better to rewrite this sentence as It comprises the following two files:.
- Page 148: The dataset is preprocessed and contains 10000, different words, including the end-of-sentence marker and a special symbol (<unk>) for rare words. should be: The dataset is preprocessed and contains 10000 different words, including the end-of-sentence marker and a special symbol (<unk>) for rare words.
- Page 148: In the course of computation, after each word to examine the state value is updated with the output value, following is the pseudocode list of the implemented steps: is a confusing sentence. A better option would be: In the course of computation, after each word to examine, the state value is updated with the output value. The following is the pseudocode list of the implemented steps:
- Page 149: The part between brackets in It computes the average per-word perplexity, its value measures the accuracy of the model (to lower values correspond best performance) and will be monitored throughout the training process. would probably sound better as (lower values correspond to better performance).
- Page 151: In this chapter, we gave an overview of deep learning techniques, examining two of the deep learning architectures in use, CNN and RNNs. should be In this chapter, we gave an overview of deep learning techniques, examining two of the deep learning architectures in use, CNNs and RNNs.
- Page 152: The GPU programming model is a programming strategy that consists of replacing a CPU to a GPU to accelerate the execution of a variety of applications. would sound better as The GPU programming model is a programming strategy that consists of replacing a CPU with a GPU to accelerate the execution of a variety of applications.
- Page 153: The reason for such specific part to rely on two GPU is up to the speed provided by the GPU architecture. should be: The reason for such specific parts to rely on the GPU is because of the speed provided by the GPU architecture.
- Page 153: TensorFlow possesses capabilities that you can take advantage of this programming model (if you have a NVIDIA GPU), the package version that supports GPU requires Cuda Toolkit 7.0 and 6.5 CUDNN V2. should be: TensorFlow possesses capabilities so you can take advantage of this programming model (if you have a NVIDIA GPU). The package version that supports GPU requires Cuda Toolkit 7.0 and 6.5 CUDNN V2.
- Page 154: For the installation of Cuda environment, we suggest referring the Cuda installation page: should be: For the installation of Cuda environment, we suggest referring to the Cuda installation page:
- Page 154: To find out which device is assigned to our operations and tensioners need to create the session with the option of setting log_device_placement instantiated to True. should be: To find out which device is assigned to our operations and tensioners, we need to create the session with the option of setting log_device_placement instantiated to True.
- Page 155: If you have more than a GPU, you can directly select it setting allow_soft_placement to True in the configuration option when creating the session. should probably be If you have more than one GPU, you can directly select it by setting allow_soft_placement to True in the configuration option when creating the session.

Conclusion:
Each topic is introduced quite concisely, so it's useful to already have some background knowledge of data mining topics. The author spends most of his time showing example code and talking about that code.
There were also quite a few confusing sentences, which I listed above (including what I think the sentences should look like). An extra proofreader would probably have been a good idea... The list has gotten quite long.

Rereadability:
I'm not going to read it again, but I am planning to read more books on this topic!

Related links:
- While reading chapter 4, I came across this article about neural networks in JavaScript.
- A clear tutorial on TensorFlow in Python which covers similar topics as this book. Includes some nice links as well.
- This review on Goodreads.

Similar books / recommendations:
- Introduction to Data Mining by Pang-Ning Tan, Michael Steinbach, Vipin Kumar
- Foundations of statistical natural language processing by Christopher D. Manning and Hinrich Schütze