Sunday 15 November 2015

iPhone or Canon 5D: what are the real differences for taking a good picture?

Hi,

I have wanted to write this post for a long time, even though it may sound a bit provocative.

I am a passionate photographer (find my galleries at http://morandstuder.com or http://www.artlimited.net/morandstuder), I have had the chance to use very good cameras, and I usually carry kilos of lenses on my back.
But carrying a heavy DSLR is not always possible, for various reasons.
So I have tried a few compacts, and I sometimes end up using my smartphone as a camera.

You have probably noticed Apple's heavy advertising about iPhone photo quality, and I must admit I am sometimes impressed by what I get from my smartphone (iPhone or other).
I have read plenty of tests, including DxO's, but nothing really practical, and nothing as wide-ranging as pitting an iPhone against a 5D Mark III.

So I decided to do it myself, and to add a Canon G7X, a high-end compact that is light, pocketable and not that expensive (much cheaper than an iPhone!).
The comparison here is only about final picture quality, with no consideration of ergonomics, autofocus, etc.

The iPhone has a fixed lens with a 29 mm equivalent focal length, the G7X has an integrated 24-100 mm f/1.8-2.8 zoom, and the 5D is fitted here with a good but fairly standard 24-105 mm f/4.

For those fond of pixels, the iPhone 6 has 8 Mpixels of 1.5 µm on a 1/3" sensor (3.6 x 4.8 mm), the G7X 20.2 Mpixels of 2.4 µm on a 1" sensor (8.8 x 13.2 mm), and the 5D Mark III 22.3 Mpixels of 6.3 µm on a large full-frame sensor (24 x 36 mm). The Canons shoot in RAW, converted to JPEG in Lightroom with little or no correction, as detailed below.
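As a quick sanity check of those pixel-pitch figures, here is a minimal sketch (in Python) that derives the pitch from the sensor area and pixel count, assuming square pixels spread over the whole sensor:

    import math

    # (sensor width mm, sensor height mm, megapixels)
    cameras = {
        "iPhone 6":    (4.8,  3.6,  8.0),
        "Canon G7X":   (13.2, 8.8,  20.2),
        "5D Mark III": (36.0, 24.0, 22.3),
    }

    for name, (w, h, mp) in cameras.items():
        pitch_um = math.sqrt((w * h) / (mp * 1e6)) * 1000  # mm -> micrometres
        print("%s: ~%.1f um per pixel" % (name, pitch_um))

It prints roughly 1.5, 2.4 and 6.2 µm, matching the figures above: each 5D pixel has about 17 times the light-gathering area of an iPhone pixel.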


So, what are the results?

1. Standard outdoor landscape with good conditions:

Compared to the big ones, the iPhone is impressive.
I took the pictures at the iPhone's focal length, which corresponds to 29 mm.


Here is the beautiful and creative view I used as a benchmark:

 iPhone



G7X



5D Mark III



You can see the difference, mainly due to the lower pixel count, but it is really OK.
You don't get the 24 mm wide angle, which is an issue for me, but it is OK in general.
Noticeably, the 5D with the 24-105 shows strong chromatic aberration, but Lightroom corrects it well.

2. Low light

Again taken at a 29 mm equivalent, and again a beautiful and creative picture!



iPhone

Here the iPhone is a disaster. You cannot read anything, and the texture of the fabric is not visible. The shot was taken at ISO 640 and 1/17 s, which is very slow (high risk of motion blur). I tried another app that produces TIFF files, but despite a 31 MB file (1.5x bigger than the 5D's RAW!!!), the "quality" is the same.

G7X


The G7X is pretty impressive, taking advantage of its fast lens (f/2). At ISO 500 it allows 1/30 s, which is much better. The quality is very good.

5D

Here the quality is one step higher, despite the relatively slow lens (f/4). I kept the speed at a comfortable 1/40 s, resulting in ISO 5000 and impressive quality in terms of detail.
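To compare how much light each camera was working with, here is a minimal sketch (in Python) computing the ISO-adjusted exposure value (EV at ISO 100) from the settings quoted above. The iPhone's f/2.2 aperture is my assumption, since the post does not state it.

    import math

    def ev100(f_number, shutter_s, iso):
        """EV normalised to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
        return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

    # (aperture, shutter in seconds, ISO); the iPhone aperture is assumed.
    shots = {
        "iPhone 6":    (2.2, 1 / 17, 640),
        "Canon G7X":   (2.0, 1 / 30, 500),
        "5D Mark III": (4.0, 1 / 40, 5000),
    }

    for name, (f, t, iso) in shots.items():
        print("%s: EV100 ~ %.1f" % (name, ev100(f, t, iso)))

The three values land within about a stop of each other, which is reassuring: all three cameras metered roughly the same scene, so the gap in output quality comes from the sensor and lens, not from the exposure.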

3. Zoom

The iPhone has no optical zoom...
So when you want to shoot a portrait, children or even some landscapes, not to mention sport or wildlife, it is another story.
Here is an example taken at a 100 mm focal length, which is typical for portraits and available on both standard zooms.

The iPhone's equivalent, a "digital zoom". Even at this small size, it is clearly awful:


And the crops:

5D: perfect

G7X: very good


iPhone: no comment



Conclusion
I think the pictures make it pretty obvious.
If you want to shoot "basic" landscapes, you can make very good pictures with an iPhone. You are stuck with 29 mm, but it is a good focal length for many situations. And the panorama feature is awesome, giving hi-res panoramic pictures.
If you want to shoot portraits or in low light, forget it. You don't need to be a purist to see that, even on Facebook, you risk not recognising your friends! If you want something very compact, responsive, easy to use and not too expensive, go for the G7X (you can buy it plus a very decent Android phone for the price of an iPhone). You can also look at Sony or Fuji, but I think the Canon is a very good compromise. You could even get a decent SLR for that price. Finally, if you want superlative quality, a top-class viewfinder and autofocus, and advanced functions, the 5D is king.
I hope these concrete pictures help.

Nicest photos here:
http://morandstuder.com/

;-)


Monday 9 November 2015

From IT to digital: disruption or continuity?

It is commonly accepted that we have entered a "new" world, a "digital" one. Chief Digital Officers, often coming from the business side, now coexist with CIOs.
IT spending is increasingly initiated by the business units. Websites merge with, or even replace, internal systems... IT spreads through the company, deconcentrates, decentralises...
But what has actually changed, in the end? The internet is nothing new; neither is IT. Smartphones? Connected objects? New development technologies? The massive democratisation of technology? The opening up and standardisation of "IT"? A change of culture? Probably a bit of all of these.

The Web and community-generated information

Not so long ago (you may even, like me, have known that time), doing documentary research or a market study meant travelling: going to libraries, registering, borrowing or buying books, and so on. Today, with an internet connection, knowledge is at hand, instantly. Instantly, and often for free. Free thanks to UGC (User Generated Content) – web 2.0 – and to open source. A new economy, based on relatively low fixed costs and zero marginal costs.
Three models make it possible to amortise the fixed costs:
1. first use (I created this for my own use or pleasure, out of a taste for helping others or belonging to a community)
2. a few paying customers (the freemium model)
3. a large number of "customers" with a low unit revenue (reputation, advertising)
Zero marginal costs, thanks to automated, massive distribution over the internet, give the widest audience free access to information.
This revolution in the creation and distribution of content intended for humans has also taken place in "machine to machine" content.

The installed base of devices and the ease of deployment

Hardware is (almost) no longer an issue today. Customers carry mobiles with a web browser and/or an app store, geolocation, multiple sensors, etc. Employees have at least a web browser, and often a mobile with the same capabilities. Deploying a service is therefore purely a software matter, with standardisation and ease of deployment that are very high or fairly high depending on whether you choose pure web or a native application. No more questions of equipment, compatibility, distribution, deployment or updates, nor even of infrastructure, given the possibilities offered by the "cloud". I can design, develop, test and deploy a service in very little time.

Automation and web architectures

The Web invented neither networks nor interfaces. Private corporate networks have existed for a long time, and standards such as EDI (1) made it possible to build many interfaces between companies. Implementing interfaces, however, has often been complicated. Direct database access, file exchanges, firewalls, standards, transcoding, ETL (2) and the like have given more than one CIO cold sweats.
The web enabled the emergence of simpler, more universal standards, notably through the concept of APIs (3), built on the web's resilient, proven models (4), durably accessible and self-service. Is it therefore a technical question? Not essentially. If it were just a matter of technical know-how, the best approach would be to pool these skills in a few companies from which everyone else would subcontract the building of their services. But the innovative companies that create value in the market all do their development in-house. So it is a deeper question, more cultural than technical.
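As an illustration of how low the barrier has become, here is a minimal sketch (in Python) of such a self-service API call; the endpoint URL and key are hypothetical placeholders, not a real service, but the pattern is the one described above: plain HTTP, stateless, JSON out, and no contract to negotiate beyond creating a developer account.

    import json
    import urllib.request

    # Hypothetical self-service REST endpoint; URL and key are placeholders.
    API_KEY = "my-self-service-key"  # obtained by creating a developer account
    url = "https://api.example.com/v1/geocode?address=Paris"

    request = urllib.request.Request(url, headers={"Authorization": "Bearer " + API_KEY})
    with urllib.request.urlopen(request) as response:
        data = json.loads(response.read())
    print(data)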

The "open" culture

Thanks to the ubiquity offered by our networks on the one hand, and to pooled infrastructure on the other – the Cloud, public APIs – which lower the cost of entry to a worldwide market to almost zero, the world of web "services" is going through the same revolution as Wikipedia-style UGC. The only investment left to make to implement a service or an idea is software development. The same drivers have been at work, but they could only operate within a complete change of culture:
1. The first-use model: I created a service for my internal use, designed from the start to be universal, and I make it available to everyone. This model emerged notably with the revolution in the architecture of companies' internal systems, made famous by Amazon: internal client-supplier systems talking to each other through APIs and web services. The internet then allows these systems to be interconnected immediately across companies.
2. The freemium model: the model of open-source distributions such as Red Hat Linux, Cloudera...
The open-source model itself was made possible by communities of developers exchanging tips, information and pieces of code over the internet.
3. The mass model: mobile applications, the Google model. This model was made possible by the worldwide reach of the internet, which can touch a huge number of people instantly, and by application stores, which distribute these programs easily.
This culture is rooted in the Silicon Valley mindset, for which the network is worth more than the individual, and value comes from exchange.
These changes overturn everything: open-source innovation is much faster, and above all interconnection makes it disconcertingly easy to reuse and build upon third-party systems. Previously, to use a third-party system (maps, personal data...), you had to contact the company, establish a partnership, negotiate, assess the connection options, and so on. Today you create a user/developer account, and it is done. No more humans, no more contracts, no more friction: this is the OTT (Over The Top) culture.
This is how software development came to play a key role in the digital revolution: not only by definition (digital is whatever has shifted into the software world), but also because software development is the main weapon for overturning the established order – the revolution – allowing newcomers to set up anywhere, very fast, and to bypass the incumbents.
The same ease is found in innovations such as self-publishing (e-books, blogs, or micro-run printing) and spills back into the physical world with 3D printing.

Agile development and "Test & Learn"

Another facet of the digital revolution is the "agile" culture, also born in the software development of the "pure players" and since generalised to the whole economy: marketing (A/B testing...), communication, even production (micro-runs) and overall company development (lean startup).
Its source is anti-Taylorism as a model. Contrary to popular belief, software development is a design activity, not a production activity. In IT, production is handled by computers, which are very docile and reliable; only design is left to humans. And good design requires an overall view, an understanding of the stakes, and a great deal of discussion. This makes development inseparable from the product, and it explains why a good developer must be versatile, yet is not interchangeable.
Gone are the specification documents, opportunity studies and five-year plans that take longer to write than to execute, and that are always wrong.
Small steps, "Test & Learn", decentralisation and agility have proven themselves. When work is no longer divided among specialised workers under centralised global planning, but products are split into disjoint features carried by small, independent, multidisciplinary teams, things go much, much faster.

The Digital enterprise: a true disruption

All of this is what makes a "digital" company: old needs now easy to address thanks to digital technology, new offerings, new business models, but also and above all an open culture, geared towards innovation (fixed costs, then low or zero marginal cost), data collection, sharing, interconnection and data exchange, and agility ("test and learn")...
Why, then, has this culture not been adopted more widely? Probably because the separation between deciding and executing lies at the heart of traditional organisations. Giving up central power over products, and agreeing to place it, even partially, in the hands of the craftsmen who build them, is anything but natural.
Yves Christol (VP Software Development – Orange)
Morand Studer (Partner – eleven)

(1) EDI: Electronic Data Interchange, standards and systems for exchanging orders, for example.
(2) ETL: Extract, Transform & Load, systems that facilitate interfaces.
(3) API: Application Programming Interface, an interface built into a system to allow the exchange of data or commands.
(4) In particular the HTTP protocol and a stateless interface model (REST).

Thursday 6 June 2013

Is the natural monopoly compatible with the internet?


What are the new digital monopolies? Natural, beneficial or intolerable?




A monopoly (from the Greek monos, "single", and polein, "to sell") is, strictly speaking, a situation in which a single seller has exclusivity over a given product or service sold to a multitude of buyers (1).
In popular culture, monopolies are associated with the battles fought to dismantle them. These monopolies were often tied to "obvious" economies of scale: you would not build two water distribution networks (a "natural" monopoly). Economic theory has long debated their overall benefit, with famous theories such as the contestable monopoly, developed around the Bell antitrust case. Yet some monopolies existed for a very long time without appearing harmful: they were essentially infrastructure monopolies, often linked to political will: rail, roads, electricity, telecommunications...

The end of natural monopolies

Recent developments have put an end to most of these monopolies, through legal or economic means. Some remain, notably at the infrastructure level (local loop, electricity transmission, railways). But even in telecommunications infrastructure, competition is almost everywhere: access to the end customer (the copper local loop) is no longer monopolistic thanks to radio (WiMAX, satellite...), fibre or cable. Moreover, mobile networks and heavy infrastructure ("backbones") have shown that a monopoly made no economic sense, since there was room for multiple infrastructures.

The "natural" monopoly thus seems out of favour today... But at the same time, monopoly is moving into other domains and taking new forms, which we will try to decipher.

Take, for example, the standards monopoly.
In technology, war rages to win it, and the loser generally loses everything: PAL versus SECAM, Blu-ray versus HD DVD, etc. This is today's "natural" monopoly, and it seems irresistible. Attempts to impose a "proprietary" format by leveraging a strong position have generally ended in failure, like Sony's Memory Stick against the SD card. Only Apple more or less manages it, with FireWire for example.

The emergence of new digital monopolies

The advent of the internet and the shift towards an information society have changed this notion of monopoly: new, fuzzier monopolistic systems are appearing. We have ranked them from the most "real" to the most virtual:
1. the platform monopoly
2. the ergonomics monopoly
3. the (social) network monopoly
4. the recommendation monopoly


1. The platform monopoly
This is the standards monopoly, reinforced by a massive, captive ecosystem. The computer operating system (OS: Windows, Mac OS X, Android, Linux...) is its archetype. The developments built on top of it represent such an investment, for developers and customers alike, that switching is hardly possible any more, and there is no room for many players. Microsoft exploited this brilliantly with its operating system (DOS, then Windows) and its software (Office). It is a compatibility monopoly: an Excel file can be read by almost everyone, and using it guarantees that my model will be understood by everybody. Adobe, for its part, managed to impose its Acrobat format (PDF), as well as Photoshop and Illustrator in graphic design. Competitors cannot keep up: you must be able to read a pdf, psd or ai file (the respective extensions of these formats) to be credible. The war is (still) open around Flash, but its relative heaviness, Apple's resistance on mobile and the emergence of HTML5 seem to be slowly sounding its death knell. Will this web format, together with exchange formats (XML...), make this form of monopoly disappear? We have been promised for some time that the "web" layer and the "cloud" will let us read anything and everything from any machine with an up-to-date web browser. Yet one only has to look at the gap between the Windows and Mac versions of Excel to see that this dream is still far from reality, and already undermined by the new web platforms (cf. 3).

2. The ergonomics monopoly
This one is more flexible than the previous one, yet even more powerful. Even if another program can read the Photoshop format, the investment in training is such that I am better off spending my hours learning the "standard": my market value will increase. The cost of software is ultimately small compared to the time needed to master it, and choosing an outsider generally confines you to basic use. And while platform compatibility can be worked around technically, double training cannot. The payoff grows very quickly with the degree of monopoly: training and advice are easy to find, and the chosen solutions become "obvious". Here again, Microsoft and Adobe have succeeded, while Apple pulled off the same feat with the iPhone, whose ergonomic choices, no doubt judicious at the time, are now taken as "self-evident".

3. The (social) network monopoly
One notch more modern, the network monopoly is today's battleground for access to the customer. It is of the same kind as the previous one, but the "consumer" investment lies in building a network and content rather than a skill. Facebook, Google+, MySpace, Skype, Twitter, LinkedIn, BBM, Flickr... all publishing sites converge on common features: an identity, a network (friends to contact, sort, etc.), public communications (status updates, "likes", etc.), private communications (messaging), content (photos, music), applications, and so on. A user can hardly publish on many platforms at once, let alone follow as many. On the other side, the network's goal is to earn the loyalty and knowledge of its members so as to "offer" them advertising, and to recreate a platform monopoly through applications (games on Facebook...) that can be monetised.

4. The recommendation monopoly, based on customer knowledge
The last monopoly, the least visible, is recommendation. There are two ways to make recommendations: either I ask you questions, or I already know your preferences. The second is faster, and often more relevant. But knowing you requires identification, and time. Hence, here again, a somewhat "natural" monopoly. If you want film recommendations, you are better off always rating your tastes on the same site. And the benefit of coupling recommendation with selling is obvious, for seller and buyer alike. One move later, if the seller knows your taste in films, it has a good chance of being able to recommend books, even music, and so on, thanks to segmentation and cross-recommendation. Amazon built its success on this. As for Google, its strategy is to offer services that require identification (Gmail) in order to know the user, and thus to provide refined results and targeted advertising on "plain" searches.

So: natural monopoly, beneficial or intolerable?

What is interesting is that all these monopolies are, at their root, in the consumer's interest: from the public postal service to Facebook, monopoly means efficiency, at least in theory.
The risks are not new; they simply express themselves differently. One classic failure mode is inefficiency caused by lack of competition. This is less likely today, given the openness and globalisation of the internet. The other risk is, of course, abuse of a dominant position.
Let us leave the legal questions to our lawyer friends, but abuse is characterised notably when one monopoly is transferred to another: I accept Microsoft's monopoly on operating systems (monopoly 1), but when it aggressively pushes its search engine on me (monopoly 4), it becomes unbearable. I accept logging into Google to benefit from monopolies 3 or 4, but when it pushes its browser on me (monopoly 1), it is no longer acceptable.



(1) Source: Wikipedia



Wednesday 17 October 2012

Did you say agile?



What can we learn from the latest evolutions of software organization?


The recent success stories of the new economy are often small start-ups that began from scratch in their "garage", working with agile processes. "Small is beautiful" was their motto.
The interesting question is whether these are only exceptions due to a few brilliant founders, or whether there are lessons to be learned and applied to larger companies' organisations. It is striking how companies like Google, Facebook or Amazon remain flexible and innovative despite a substantial size (32,000, 3,000 and 22,000 employees respectively).
At a time when "offshore" is as fashionable as "lean", how should organisation and processes be adjusted to reach maximum efficiency?

When 1 is better than 10

The famous iPhone software (its operating system) was developed by 60 developers, while Motorola, with 1,500 people, was unable to develop a competing operating system: the quality of developers cannot be compensated by quantity. Besides, an oversized team is often counterproductive. We have seen projects run several months late over only 15 man-days of development.
Why? First, at the individual developer's level: "a good developer is worth 10 mediocre ones" (study by Sackman, Erickson and Grant). Good developers code better and faster. Code quality affects all the later development phases: testing, debugging, maintenance and upgrades. Then, at the team level, skill or incompetence is amplified because actions are not parallel (where a mistake would only affect its own results) but interrelated: an error impacts everyone and delays the entire project.
These laws apply to many other sectors, as long as we are not in pure execution: craft industry, creation, analysis, etc. Management must be adapted to enable initiative-taking. Indeed, pure execution works much better under "military" management, where a clear, detailed, executable order is dictated to the workers. As soon as we expect some initiative from the workers, we are no longer in pure execution. Initiative is expected from all managers, but also from technicians and other populations who are not pure executors, and more broadly on any big project.

Simplicity and lightness

Let's focus back on software to understand what made these teams efficient. First, we have to admit that describing what a piece of software should do (the so-called specification) can be as complicated as actually coding it! Besides, specifications are always subject to interpretation, and thus always false (questionable/incomplete/not precise enough). This observation leads to limiting the specification phase as much as possible and merging it with development. But it is not about dashing ahead first and thinking afterwards either. It is simply about limiting the scope of the specifications to the overall architecture and interfaces, then cutting the work into coherent pieces. Thanks to these methods, Facebook publishes a new version every week. Firefox has reduced the delivery time for a new version from several months to six weeks.
What tools should we set up to achieve this?
The teams are small and accountable for delivery, with a feature manager (product owner) acting as the single representative of the client, and a Scrum Master whose role is to facilitate cooperation, support the team in its external relations and relieve all the little pains not directly related to the product. And last but not least, a small, competent, accountable team of developers with a broad vision of the project and a keen sense of initiative. Just the opposite of certain projects, where decision-makers, project managers and approvers outnumber the actual doers.

Dare flexibility

So specifications should be light, evolving and easily adaptable to the needs of development: they are only a tool in software production, not a bible written in stone. They must be reduced to the strictly necessary; they should only give general orientations and objectives, so as not to over-constrain development, and they must be able to evolve during the project according to the participants' contributions.
The detailed specifications contract is no longer needed: it was reassuring, but unworkable! Such specifications are always false and become a burden, the source of the classic conflict between customer and supplier over delivery deadlines. They should only support a dialogue, as one of the tools for sharing the vision of the product with the developers. And the dialogue should not end until final delivery.

Detailing is not winning

This lesson is widely generalisable: how many hyper-detailed schedules were broken before they even started? How many 50-page contracts, hardly read, sit useless until a fundamental misunderstanding is finally discovered?
Details can be reassuring, but they may make you miss the main point. Is the 25th decimal of Pi exact? Pi = 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706? I let you check...
A culture of simplicity is very difficult to achieve. Simplicity is highly complicated... ½gt²: maybe you remember this formula for free fall. It took Galileo and Newton to reach a formula this simple, one that neglects air friction but is right in general. Not everybody is Newton. But by looking for simple, concise results, we are more likely to head in the right direction than by trying to produce a 200-page block.
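If you actually want to check, a couple of lines with an arbitrary-precision library settle it; a minimal sketch in Python with mpmath:

    from mpmath import mp

    mp.dps = 105                      # work with 105 significant digits
    digits = mp.nstr(mp.pi, 102)      # '3.14159...' with about 101 decimals
    decimals = digits.split(".")[1]
    print("25th decimal of pi:", decimals[24])   # prints 3
    print("first 100 decimals:", decimals[:100])

(The 25th decimal is indeed a 3.)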

The internal customer/supplier culture

The internal customer/supplier culture has many benefits. One of them, obviously, is to make internal processes much clearer and more understandable. Another is to cut processes and projects into intermediate, measurable "deliverables", in the spirit of what is described above. Nevertheless, in many cases it generates extra costs, by accumulating "wishes" as well as by ignoring existing constraints. That is what we call technical complexity:
•    "The product is too expensive because those lazy guys from the purchasing department failed to find my part at the right price. They did not respect their SLA!" "Of course, but you may have forgotten to tell them that 3 m was an approximate length, and that the standard 3.02 m would have suited you perfectly well."
•    "I specified one column for months and another for dates. In fact, this is not what we really need, but the project has been delayed by a month and now I cannot change the specifications."
Does this remind you of something? Co-writing the product between marketing and developers is essential to the overall efficiency of the project. The discussion between the two teams about interfaces (software, physical) should be a bilateral negotiation rather than one-sided orders.

Iterations

Along with these flexible specifications, flexibility requires iterative processes that accommodate change: the software is developed in successive, periodic increments called sprints.
A validation takes place at the end of each sprint (rather than at the end of the project); something assessable has to be delivered at the end of every sprint. New features can be added to the specifications in any sprint. Existing code can also be optimised (re-engineering). The sprint (both its progress and its result) is analysed to prepare potential training, improve processes and allow the transfer of in-house knowledge.
Sprints typically last a fortnight, but of course this must be adapted to the size of the project: two hours to prepare a presentation, a few weeks for a large industrial project. The right duration balances seeing something concrete and measurable on one hand, and a certain stability of the expressed needs as well as some room for initiative on the other.
These methods can genuinely be applied to projects as diverse as preparing a seminar or launching a new product. We have seen customers asking for a minute-by-minute schedule of a three-day seminar, although it was clear the schedule would have to adapt to the participants' reactions.

You may wonder why estimates are almost always wrong

Why are estimates almost always wrong? The boundaries of the project are not yet clearly defined. The difficulties are not immediately visible. Estimates are made by management or marketing rather than by the operational people, who usually have a better grasp of those seemingly insignificant details that can prove extremely time-consuming. Lessons from a previous project are transposed to another one without taking the specificity of each project into account...
Project hazards offend our rationality and our optimism. But technical challenges, unexpected changes, setbacks and mistakes are inevitable. If there are none, we may start wondering whether the project creates any value at all! One must accept unpredictability, and narrow it down by estimating portions of the project and reassessing them regularly.

Get it right the first time

Then comes the question of quality. "Get it right the first time" always beats "statistical quality" by a wide margin. If a shampoo bottle is pulled off the line because a simple sensor detects that the cap is not properly screwed on, you can correct it immediately and ensure 100% quality cheaply. This would be impossible to reach with an end-of-line inspection.
How to translate it into acts?
•    First, through culture: avoid the "we will correct that later", from the spelling mistake that will supposedly be caught by a proofreader to the serious "bug" that is supposed to be caught by the validation/quality team.
•    Protect your teams from "crunch mode", the burst of hard work to meet a deadline. The increase in pace comes at the expense of quality, which can cause setbacks.
•    Set up immediate, simple checks: consistency of results, sprint validation by the client.
•    Build clean and legible tools: code must be as clean as a Toyota workstation!
Having your work (plans, code, presentations, business cases, etc.) challenged by your peers is a good habit: you work much better when you know your peers will review you. We deliver clear, properly presented material out of respect for them. The return on investment is usually immediate.

Technology and market watch

Finally, the essence of all good management and personal organisation books: start with what is important rather than what is urgent. The people who stay above the fray are those who manage to keep a constant watch on the state of the art, and can thus choose the right model, tool, language or framework for the problem at hand.
It is often tempting and comforting to spend time on day-to-day business, while technology and market watch, or investment, seem more uncertain. Yet it is usually more efficient to solve a problem for good, by finding an existing solution, automating it or delegating it ("teach a man to fish and you feed him for life"), than to keep solving it on your own. While the search for a lasting solution takes two or three times as long as a quick fix, it usually pays off very fast.
In the end, the so-called "agile" revolution in the IT world is full of lessons for managing many other kinds of projects. One could say this revolution continues the "quality" methods initiated in the 70s. Yet those "quality" methods were so badly implemented, and generated so many reassuring reports and manuals that never left the cupboards, that we thought it worth (re)drawing an overview of these "disruptive" methods, which make it possible to launch a new product every week (not every year) at no additional cost.

Wednesday 26 October 2011

Yes, size does matter!


How big can a phone be?
What is the right size for a phone, what is the right size for a tablet?
Here are a few simple facts that try to set some thresholds in this fuzzy world. Sorry if you find them too obvious, but looking at the market, they don't seem to be...


Which one is the iPad?
1. The one-hand test
For me, a phone should fit in one hand, and it should be possible to use it with one hand, the other carrying your bag or doing whatever you want, as long as it is legal ;-)
So, what does it mean?
I have a "regular" hand, so I think my own experience is relevant. Larger hands can add a few millimetres to my conclusions.

That's OK (Galaxy S2)

The test is being able to type a phone number or a text with your thumb, without too many errors, while staying comfortable. This probably leads to a width of 65-66 mm, considering current screen bezels.
The catch is that it also depends on the thickness of the phone. So a Samsung Galaxy S2 at 66.1 x 8.5 mm is comparable to an HTC Sensation at 65.4 x 11.3 mm, while an HTC HD2 at 67 x 11 mm is too big.
This will evolve as bezels get thinner and thickness shrinks, which should allow 4.5", maybe 4.7" or even 5" screens.
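To see why 65-66 mm of width maps to roughly a 4.5-5" screen, here is a minimal sketch (in Python) converting a screen diagonal and aspect ratio into a physical device width; the 4 mm of bezel per side is my assumption, not a figure from this post.

    import math

    def device_width_mm(diagonal_in, aspect=(16, 9), bezel_mm=4.0):
        """Device width in portrait: the panel's short side plus a bezel on each side."""
        long_side, short_side = aspect
        panel_short_mm = diagonal_in * 25.4 * short_side / math.hypot(long_side, short_side)
        return panel_short_mm + 2 * bezel_mm

    for d in (4.3, 4.5, 4.7, 5.0):
        print('%.1f" screen -> ~%.0f mm wide' % (d, device_width_mm(d)))

With today's ~4 mm bezels a 4.5" panel comes out around 64 mm; shaving the bezels to ~2 mm per side is what would bring a 5" panel close to the 66 mm limit.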

OOH, it is too big
(Galaxy Note)
The other constraint is height: your thumb must be able to travel across the screen comfortably, reaching the keys at the bottom as well as the notification bar at the top. This is becoming more and more of a constraint as screens get longer with the move towards qHD resolution. It is more subjective, since the hand can move, but it seems we have reached the limit.
By these criteria, the new Galaxy Note (5.3" screen) does not pass the test. I let you decide whether that is enough to call it not a phone but a (small) tablet.






2. The pocket test
Then, what is the next threshold?
The HTC Flyer in
a trouser pocket
The pocket test seems relevant. A jacket and a pair of jeans have more or less the same pocket size: 120 mm wide (taking thickness into account). This also corresponds more or less to the size of a lady's handbag. Today this means a 7" screen. The new Samsung Galaxy Tab 7.7 (133 mm) does not fit in a pocket, which is a real pity. Bezels are currently very wide on tablets, and we can expect 8" screens to fit within 120 mm soon.



The HTC Flyer in
a jacket pocket
Weight is also a constraint. My wallet weighs 160 g, which gives a benchmark. The Flyer, at 420 g, is heavy, and today only the Note is below 200 g. So, when will a 7" tablet at 250 g with a stylus be available???
 


3. Typing without a support

HTC Flyer
If you intend to use your tablet standing, you had better not go over 7" either. There are two ways to use it: typing with two thumbs, or holding the tablet in one hand and typing or writing with the other. In both cases a width of 120-130 mm is a good threshold, even if it is more flexible than the previous one.



But what is the point of a bigger screen?
Resolution is not really an issue: the Galaxy Note has the same resolution (800 x 1280 pixels) as a Galaxy Tab 7.7" or 10.1", and more pixels than an iPad 2 (768 x 1024 pixels). Sure, bigger is more comfortable, but the Note is amazing for reading books.
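Pixel density makes the point concrete. A minimal sketch (in Python) computing pixels per inch for the devices quoted above:

    import math

    def ppi(px_w, px_h, diagonal_in):
        """Pixels per inch from resolution and screen diagonal."""
        return math.hypot(px_w, px_h) / diagonal_in

    devices = [
        ("Galaxy Note 5.3", 800, 1280,  5.3),
        ("Galaxy Tab 7.7",  800, 1280,  7.7),
        ("Galaxy Tab 10.1", 800, 1280, 10.1),
        ("iPad 2 9.7",      768, 1024,  9.7),
    ]

    for name, w, h, d in devices:
        print("%s: ~%.0f ppi" % (name, ppi(w, h, d)))

The Note packs the same pixel count into a much smaller diagonal (~285 ppi versus ~132 ppi for the iPad 2), which is why it is so pleasant for reading.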



4. The two hands keyboard

Galaxy 10,1"
If you intend to use your tablet sitting down, like a laptop, you will be interested in typing with two hands, as on a normal QWERTY keyboard. You may not believe it is convenient; in fact it is astonishing. You can really take notes during a meeting... For that you need 190 mm to be comfortable, which corresponds, for example, to the Samsung 8.9" screen or the iPad.



So, is 9” the perfect size?

On the one hand, I believe that an 8.5/9" screen in 16:9 format could, in the near future, meet both criteria above: 120 mm overall width to fit in a pocket, and 190 mm of screen length to allow two-handed typing.

On the other hand, there is no hard limit until you reach school-bag size, which allows 11/12" screens that start to be really comfortable. My paper notebook weighs 560 g, exactly the same as the 10.1" Tab, while being much bigger, and it remains really acceptable...
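A quick check of the 9" claim above (Python again; the geometry is the same as in the earlier width sketch):

    import math

    # A 9" panel in 16:9, held in portrait.
    diag_mm = 9 * 25.4
    long_mm = diag_mm * 16 / math.hypot(16, 9)    # screen length
    short_mm = diag_mm * 9 / math.hypot(16, 9)    # screen width
    print(round(long_mm), round(short_mm))        # 199 112

About 199 mm of screen length comfortably exceeds the 190 mm two-hand typing threshold, and 112 mm of panel width leaves ~4 mm of bezel per side within the 120 mm pocket limit.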

Here is a chart to summarise: light green corresponds to current possibilities, dark green to the near future.

As always, everything depends on how you intend to use your phone or tablet, but in any case keep in mind that resolution matters as much as screen size, while physical size determines how your mobility is affected.

Note: all figures come from the very good GSM Arena site.

Wednesday 7 September 2011

The end of hardware: everything now is just software

In three years, the iPhone has made most portable multimedia devices obsolete. During the iPhone launch keynote, Steve Jobs presented it as "an iPod, a phone and an Internet communicator", making music players and palmtop computers (such as PDAs) useless. With mobile calendar and contact synchronisation, Palm and the other "organisers" became outdated. With the camera phone, compact digital cameras are less and less necessary, and with the integrated GPS and compass, no more need for TomTom or Garmin... With the iPhone, Apple provides a universal, must-have Swiss Army knife. Single-task digital devices can be left on the shelves of History.
"Power users" are certainly not going to push aside their reflex cameras or hi-fi systems for an iPhone, but for most consumers, the iPhone satisfies all their digital needs.
However, smartphones now exist beyond the iPhone, and every manufacturer has redoubled its creativity to compete. Google and Android have finally provided a truly convincing alternative, built on reliable products that are cheaper than the iPhone. Google benefits from the support of leading players like HTC, Samsung, LG and Sony Ericsson. Microsoft, with Windows Phone 7, seems able to offer a genuinely innovative mobile experience designed for the mass market, with the brand-new support of Nokia. The last player in this smartphone ecosystem is a newcomer to the mass market: RIM. Building on BlackBerry's fame, it provides up-market services as well as a community-targeted BlackBerry Messenger service that seduces many young people.
These are the players that make the market today.

Beyond the smartphones' success, what will the consequences be for the mobile handset industry, and also for the consumer electronics and software markets? What services will consumers be offered?

The end of hardware: everything now is just software

Further component integration, falling prices and broader consumer electronics ranges are both cause and effect of a mass-market success. This phenomenon has been observed in personal computing since the nineties and in the handset market since the 2000s.
Today, even at the low end of the mobile phone range you will find touch screens, cameras and multimedia players. Further up, you find mobile phones with the power of a five-year-old top-of-the-range computer, plus high-speed internet, a GPS, an accelerometer and a TV output. These components, once very expensive, give birth to new consumer uses, not necessarily obvious ones:
What can you do with an accelerometer? Not really measure the acceleration of your new car, but rather your phone's angle, opening up new horizons, particularly in gaming, by transforming the handset into a motion-sensitive gamepad, the very thing that made the Wii a hit (a small sketch of the tilt computation follows below).
Is a touch screen useful for making a call? Not really, rather the opposite; but with internet access it becomes a universal remote control, able to drive your multimedia computer from your armchair, bring your playlist up on the screen and pop up any buttons you may need.
Add those two features together and you can fly a radio-controlled helicopter.
And a GPS, what for? Everybody already has one in their car; it was the product of 2008, wasn't it?
Yes it was, but always-on internet access enables community applications: not only knowing whether a friend is sitting in the same bar as you, but also whether a speed camera has been spotted on your road.
A camera? To make fine art? Or rather to identify barcodes, business cards and movements, or to scan the surroundings, paving the way for place recognition. The combination of screen and sensor lets you share the day's pictures with friends: throwaway, but nice.
A compass? You don't do orienteering? Maybe not, but the phone knows where it is pointing, so it can show you the sky map with the names of the stars you are looking at.
All these features are available to third parties thanks to platforms open to application developers, and it is those developers who give these technological advances their full meaning.
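As an illustration of the accelerometer-as-tilt-sensor idea above, here is a minimal sketch (in Python) of how a game might derive the phone's angle from raw accelerometer readings; axis conventions vary between platforms, so this is one common choice, not any particular vendor's API.

    import math

    def tilt_degrees(ax, ay, az):
        """Pitch and roll, in degrees, from a 3-axis accelerometer reading.

        Assumes the device is roughly static, so the sensor mostly
        measures gravity (any consistent unit, e.g. m/s^2).
        """
        pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll

    print(tilt_degrees(0.0, 0.0, 9.81))   # lying flat -> (0.0, 0.0)
    print(tilt_degrees(0.0, 6.94, 6.94))  # tipped 45 degrees -> (0.0, ~45.0)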

What will be the effects on the consumer and on the market? The main one is that specific hardware is becoming obsolete.

For instance, to benefit from a speed-camera alert system, you used to have to buy a dedicated device (the first ones sold for 700€) and pay for a contract covering communications and the service. Now you just download the application. A universal remote control used to cost more than 300€; the application for iPod Touch, iPhone and Android is free... Today any smartphone already is the device; it just needs the right service from the application store. But these usages could only take off thanks to reassuring unlimited data plans enabling users to be "always on": the megabit price dropped from 3€ to 0.5€ with the iPhone launch.

Just as the PC put an end to specific industrial automation hardware, smartphones have shifted the issue from hardware to software: the device already exists; you just need software and services to use it.

On this point, it is worth paying attention to Apple's communication: iPod advertisements focus on games, and the main part of its communication (the famous keynotes) dwells on software and partners.

Creativity is impressive: by combining the camera sensor, GPS, internet access, accelerometer and compass, you can go far beyond traditional uses. Augmented reality becomes possible: the screen captures what we see, and the smartphone, knowing where you are and what you are looking at, overlays information such as monuments' names or restaurants in your neighbourhood.

Even the American Army uses iPhones to calculate bullet trajectories, to translate, and to display pictures sent by drones... Many service suppliers have given up producing specific hardware and simply add software layers to existing devices. Soon, on-board computers and car radios, today built with specific hardware and software, will be on-board smartphones playing music and video on demand... In-flight entertainment systems will follow the same path (it is already the case with Windows CE, the grandfather of portable operating systems). Just as there are computers everywhere today, there will be smartphones everywhere tomorrow, and not all of them will place calls.


To be continued!