  • From: tigor (tigor), Board: ITExpress. Subject: How the 300 Experts of Shanda Innovation Institute Failed (original). Posted on: Shuimu Community (Fri Mar 21 15:17:25 2014), on-site post. [Accumulated point reward: 1888/0]

    1. When I arrived

    When I came to Shanda at the end of 2010, the air in Pudong could still rival that of Austin in the US (the best air of any place I have seen); even winter carried a breath of spring. Shanda itself was just as full of life. My headcount was in the "Shanda Innovation Institute," which at the time had gathered more than 300 of the country's top internet engineers; one team had even won a top international award for speech recognition. Only later, once I knew the industry better, did I realize how strong these 300 people really were: by the standards of the day their technical skills were first-rate in the internet world, and the packages they received on joining the Institute were first-rate as well.

    The Innovation Institute's model resembled that of Innovation Works: anyone could propose a project and freely organize a team of three to five inside the Institute. The Institute was steered by its project review board, headed by "Chen the younger" (Chen Danian, born in 1978, younger brother of Chen Tianqiao). Weighing over 200 jin, he was jokingly counted among the Institute's "three heavyweights." He was the true creator of Shanda's games business: the story goes that he founded the games company out of his own love of play and of technology, and that Chen Tianqiao, watching his brother play, spotted the opportunity, bought in, and ran with it, leading to the legendary tale of the licensee swallowing its Korean licensor and listing successfully on NASDAQ. Chen Danian wears glasses and thinks clearly, but he can be sharp, even caustic; and although he actually held the helm, he always tried to project a "grassroots" image. I genuinely admire his talent.

    2. What I saw

    I sat in on many project review meetings and know the review process and its points of focus well. The review board consisted of several members: at first Chen Danian, Guo Zhongxiang, Xu Shiwei, and a few others; later, when the cloud computing, speech, multimedia, and search sub-institutes were founded, their four directors joined the board as well.

    From its founding to its dissolution, the Institute lasted roughly three years and approved no fewer than 50 projects, producing a line of products that included Everbox (Shanda's cloud drive), a Web OS, Maku Notes, WiFi Master Key, a PhoneGap-like product, and more.

    Everbox was one of the earliest cloud-drive products on the market, modeled on Dropbox in the US. Xu Shiwei ran it at first; later a colleague from the cloud computing institute took over. In its early days it boomed, passing one million registered users and ranking in the market's top five, just behind products such as Kingsoft's cloud drive. Only late in its run did the team discover that machines and bandwidth were costing several hundred thousand yuan a month while revenue was essentially zero. The strategy therefore changed to "Get only, no Put": users could download and browse their data but could no longer upload. In substance, this abandoned the product and let the users drain away on their own.

    The Web OS was built by a team of about 20 led by XXX, with a grand vision: a mobile operating system that would stand alongside Android. The project passed review and got underway, but midway through development it was caught in the Institute's restructuring; essentially the entire team was poached by Huawei's research institute, with XXX taking a senior position at Huawei.

    The big-data project was a frontier effort built on a cluster of 500 high-end servers and staffed by dozens of strong engineers. It could analyze user behavior, trends, and traffic distribution from Shanda Literature, Ku6, Shanda's advertising business, and more, over data volumes reaching tens of petabytes. What it ate was grass (unstructured logs of every kind); what it produced was milk (polished analytical reports that could inform decisions).

    Maku Notes was a "small" product for keeping users' notes online: fragments of text, plans, personal pictures, and so on, with the backend data stored in Shanda Cloud's storage service. It was well received after launch; colleagues and friends around me used it constantly as a "networked notebook." The product is still running today, though its team has since been spun off.

    There was a series of other products as well. A PhoneGap-like product, for example, offered app developers countless terminal environments for testing: the Institute kept a device library of dozens of phones, and through the product developers could install their apps on those devices to debug and test. WiFi Master Key let users conveniently share and obtain WiFi passwords everywhere; morally a bit risky, but very popular with users.

    3. For individuals, a success

    For individuals, I consider the organization a success, because everyone learned a great deal from it. At no company before had I seen discussions and review meetings so open and so technically substantial. Fittingly, these 300 people have done quite well since leaving Shanda: some founded startups and won the favor of investors; others joined other companies as senior executives.

    Take Xu Shiwei from the Institute: after leaving, he founded Qiniu (cloud storage) in 2011 and serves as its CEO. In just two years it has grown to dozens of people, its storage product is one of the market leaders, it has raised at least a Series B, and it is valued in the tens of millions.

    Or Ji Xinhua, who left to found UCloud in 2011 as CEO. It has since raised a Series B of more than US$10 million and is valued in the hundreds of millions of yuan.

    4. For the organization, a failure. How did it fail?

    Measured against its mission, however, the organization failed: it never gave birth to a great product. Why?

    First, the organization made a mistake at its very inception: it cared only about product innovation, not product operations. In review meetings I often heard Chen Danian say, "Don't worry about making money for now." My impression was that he was very confident in his own wealth. Perhaps he did not truly dismiss operations, but that sentence, that fixed cast of mind, really did let the developers focus solely on building products while neglecting the punishingly hard work of winning users over.

    Second, the projects lacked synergy. Every project strove for novelty, with cutting-edge concepts and advanced technology. Everbox was among the first three companies on the market doing network storage; Maku Notes, the Web OS, and WiFi Master Key all anticipated market demand for the years ahead. But these products had essentially no relationship to one another; whatever connections existed were "coincidental," and nobody analyzed how to combine them into a single fist.

    Third, the staffing mix was unreasonable. Three hundred technical experts, each able to hold a front alone, each devoted to technology and innovation. But how many of them could patiently explain a customer's problem? How many had studied the market and the customer in depth? How many were willing to set aside their technical obsessions to do things users care deeply about but that carry no technical glamour whatsoever? For people with technical ambitions, these important tasks are ones they are neither willing nor able to do.

    Fourth, the timing of investment was wrong. Up-front R&D spending was enormous: the salaries of 300 experts add up, and payroll, software and hardware, and day-to-day overheads came to no less than 150 million yuan a year. When the Institute was founded, the hope may have been that the moment products were built, explosive growth would follow. Only after going to market did they discover that, for all that spending, "the revenue is less than what one new game earns in Singapore!" (Chen Tianqiao once compared a project's revenue with a game's profit; those were his words.) So when real investment was finally needed, the privatization of Shanda Interactive and a strained cash position made every penny contested. In practice that meant no investment; no investment meant even less revenue, until one team after another fell apart.

    [b] These four causes, taken together, come down to one thing: the neglect of operations. Care about operations, and you must first care about the market, the user base, and the business model; you will necessarily consider how projects combine to bring greater traffic and brand reputation; you will necessarily consider how many product people, operations people, maintenance engineers, and pre- and after-sales staff, the "peripheral" talent, to hire; and you will necessarily pace early and late investment so that the business can actually be sustained. [/b]

    5. How can failure be avoided?

    Look across successful companies and products: they are invariably successfully operated products. Take Shanda's games business when it was in Chen Danian's hands: without Chen Tianqiao's foresight at the time (coordinating IDC resources, pioneering the charging model, building agency channels, and finally acquiring the original Korean developer), there would have been no Chen Tianqiao glory days. In fact, when Shanda Games succeeded, its product was not even its own; the company was merely a licensee! Clearly, even someone else's product can succeed this spectacularly through operations.

    At the Innovation Institute, however, we swung from one extreme to the other: all product, no operations. Why? By rights, Chen Tianqiao and Chen Danian could not have been unaware of how much operations matter. My own view is that every product's operations are unique, determined by the product's characteristics, its customers, and its market environment; if personal will runs against those currents, success is impossible, and if it runs with them, success follows. As I see it, after Shanda Games succeeded, Chen Tianqiao and especially Chen Danian believed their experience and wealth could be applied to other products, that the games playbook could be copied elsewhere. Hence: just have everyone "innovate products," and the rest can be handled with money and with the existing experience and model. That mental tilt, conscious or not, may be the failure's cause at the level of mindset.

    Look across successful products: most are really successes of "operations."

    First, operations must come first; the product itself cannot. A product is like a piece of iron: in the hands of an ordinary person it is scrap; a cook can forge it into a kitchen knife; a general can forge it into bows and halberds. An organization must think constantly about how to operate its products, not treat the product as the goal in itself; otherwise it trades the root for the branches.

    Consider: Jack Ma is a far-sighted operator; Zhou Hongyi is an operator skilled at challenging stronger rivals from adversity; Pony Ma is an operator who casts a wide net into the waves and hauls in the goldfish. Think about it: when Taobao had only a few dozen merchants, did it need any deep technology? Tencent's QQ was the kind of software our professors assigned as a lab exercise in graduate courses; and by the time 360's antivirus arrived, Rising and Kingsoft were already there, so technically it had no edge at all.

    These products succeeded because their CEOs "operated" them cleverly, winning customers' hearts with products, service, brand, and a whole arsenal of means.

    In the internet business, what looks like a "product" exploding into success is, in substance, usually a success of "operations."

    Second, operations must be the CEO's job. If the CEO, the one person who can marshal people, money, and material, neglects operations, the product cannot succeed. When the emperor is not anxious, it is useless for the eunuchs to be: a eunuch can mobilize a few palace maids at most, while only the emperor can grant land and reward ministers. Shanda's failed projects make this plain: when the leadership would not, or could not, marshal resources, the staff could only look on.

    Third, operations demand a long-range and accurate eye. Many have said that business is, at bottom, a contest of vision; on the internet this is doubly true. An operator should carry in his head a clear picture of the customer's usage scenarios and experience. Only then can he keep faith in his own value and shrug off short-term pain.

    Fourth, operations must weigh the combined influence of people, money, material, and outside forces, finding the favorable factors and avoiding the unfavorable ones.

    A few more: operations must stay grounded, knowing oneself and one's organization; and it must proceed step by step, neither too fast nor too slow.

    Of course, success has many more ingredients to weigh. Here I have only analyzed the causes of failure, which is at least a necessary condition for avoiding it.

  • INITIAL THOUGHTS ON THE ROCKET ANNOUNCEMENT

    When Docker launched 18 months ago, we set out on a mission to build “the button” that enables any application to instantly and consistently run on any server anywhere.

    Our first task was to define a standard container format that would let any application get packaged into a lightweight container that could run on any infrastructure.

    With a lot of hard work and participation from across the community, Docker's capabilities grew; we were able to make the same Docker container run successfully across all major infrastructures, and we built a robust ecosystem that now includes:

    - over 700 contributors (95% of whom do not work for Docker, Inc.)
    - over 65,000 free Dockerized languages, frameworks, and applications (services)
    - support by every major DevOps tool, every major public cloud, and every major operating system
    - a robust ecosystem of third-party tools built on top of Docker; there are now over 18,000 projects on GitHub with Docker in the title
    - over 175 Docker meetup groups in over 40 countries
    - millions of Docker users

    Along the way, we built a robust, open design and governance structure, to enable users, vendors, and contributors to help guide the direction of the project.

    For the past nine months, we have articulated a vision of Docker that extends beyond a single container. While Docker continues to define a single container format, it is clear that our users and the vast majority of contributors and vendors want Docker to enable distributed applications consisting of multiple, discrete containers running across multiple hosts.

    We think it would be a shame if the clean, open interfaces, anywhere portability, and robust set of ecosystem tools that exist for single Docker container applications were lost when we went to a world of multiple container, distributed applications. As a result, we have been promoting the concept of a more comprehensive set of orchestration services that cover functionality like networking, scheduling, composition, clustering, etc. While more detail will be provided at the DockerCon conference this week in Amsterdam, a few design points are worth noting:

    - Multi-container orchestration capabilities, as with the container standard itself, should be created through an open design process with collaboration and feedback from a community and ecosystem.
    - These orchestration functions should be delivered as open APIs, developed in the open using the open design process.
    - These capabilities should not be monolithic. Individuals should be free to use, modify, or not use these services and their higher-level APIs.
    - These capabilities and APIs should support plug-ins, so that people can choose the scheduling, clustering, logging, or other services that work best for them, without sacrificing portability, the ability to work across infrastructures, or the ability to leverage the 65K+ Dockerized apps or 18K+ tools that work with Docker. This plug-in model has worked exceptionally well for execution engines (e.g., libcontainer, LXC) and file systems (BTRFS, device mapper, AUFS, XFS). Expect to see more in our announcements this week.

    Of course, different people have different views of how open source projects should develop. As noted above, the overwhelming majority of users, the vast majority of contributors, and the vast majority of ecosystem vendors want the project to support standard, multi-Docker-container distributed applications. Many vendors, large and small, both welcome and are contributing to this effort. (For more on open governance in Docker, please see this post.)

    We are committed to the ecosystem of users, vendors, and contributors. Whether people add value in the form of contributions to Docker, as independent projects that build upon the Docker container format, as plug-ins to the Docker orchestration APIs, or otherwise, we hope that the open, layered approach provides options for all. Nonetheless a small number of vendors disagree with this direction. Some have expressed their concern that, as Docker expands its scope, there may be less room for them to create differentiated, value-added offerings. In some cases, these vendors want to create orchestration solutions that are tailored for their particular infrastructure or offerings, and do not welcome the notion of portability. In some cases, of course, there are technical or philosophical differences, which appears to be the case with the recent announcement regarding Rocket. We hope to address some of the technical arguments posed by the Rocket project in a subsequent post.

    For now, we want to emphasize that this is all part of a healthy, open source process. As Docker is open source, and Apache-based, people are free to use, modify, or adapt Docker for their own purposes. They are free to use Docker as a single container format. They are free to build higher level services that plug in to Docker. And, of course, they are free to promote the notion of an alternative standard, as the folks behind Rocket have chosen to do.

    While we disagree with some of the arguments and questionable rhetoric and timing of the Rocket announcement, we hope that we can all continue to be guided by what is best for users and developers.

    http://blog.docker.com/2014/12/initial-thoughts-on-the-rocket-announcement/

  • How does Facebook do automated testing?

    HuangLi@SDET

    There was a discussion on Quora recently that asked, roughly: "What kind of automated testing does Facebook do, and how do they test so that weekly releases ship without things breaking?"

    Steven Grimm of Facebook gave a good answer to this question. I thought it was worth sharing, so I have translated it here in the first person.

    ▲ For PHP code, we have written a very large number of test classes on top of the PHPUnit framework. Their coverage is broad, from simple true/false unit tests to large-scale integration tests against backend services. Developers run these PHPUnit suites as part of their everyday work, and the same suites also run continuously on dedicated machines (i.e., continuous integration). When a developer makes a sizable change, automated tooling on the development machine runs the tests and generates code-coverage data alongside; for every diff submitted to the repository, a coverage-annotated test report is produced automatically at code review time.

    ▲ For frontend code, we use Watir (a browser-driving UI automation framework) for in-browser interface tests. These cover the site's page features, with special attention to privacy: for rules of the form "user X posted Y, and Y should/should not be visible to user Z," there is a large body of browser-level cases. (The privacy rules are of course also tested by lower-level methods, but their implementation must be enforced strictly and carries very high priority, so this area has to have ample test coverage.)

    ▲ Besides the fully automated Watir suites, we also have some semi-automated tests. They use Watir as well, so that filling out forms and clicking buttons to drive a whole on-screen flow is less tedious, while we can clearly inspect and verify that each step of the flow is correct and reasonable.

    ▲ We are also starting to try JSSpec (a JavaScript unit-testing framework) for unit tests of our JavaScript code, but this is only just beginning.

    ▲ For backend services, we use many different frameworks and methods depending on the service. For projects that will be released as open source, we use open source frameworks such as Boost (for C++) and JUnit (for Java). For projects that will never be released externally, we use an internally developed C++ test framework that integrates tightly with our build system. A handful of projects use project-specific test tools. Most backend-service testing is tied closely to the continuous integration/build system, which runs the tests against the source automatically and continuously, stores the results in a database, and pushes them into a notification system.

    ▲ HipHop has a similar continuous integration setup: HipHop's own unit tests plus all of the PHPUnit-based tests are run, and all results are compared against those from the ordinary PHP interpreter, making behavioral differences between the two PHP runtimes visible. (Note: HipHop for PHP is Facebook's PHP execution project.)

    Facebook's test tooling stores results in a database and at the same time sends a notification email with the failure details; each developer can tune the scope of what they receive (for example, you can choose to be notified only when a test has failed continuously for some time, or the moment a single case fails). In the browser UI, test results are integrated with the bug/task tracking system, so a test failure is easily linked to a development task.

    A very important phenomenon in testing is "blocking": a failing test case may block a release. (At Facebook, release engineers judge whether code with known problems can go to production, and where necessary they are authorized to stop a release.) Blocking a production release is treated as a very serious matter, because Facebook takes deep pride in its rapid-release model.

    The team I belong to is Test Engineering. Our main charter is to build the common infrastructure used by everyone described above, and we also maintain the test frameworks, such as PHPUnit and Watir. Facebook has no dedicated testing team: every engineer writes automated tests for their own code and maintains them, keeping those tests running correctly as the product code changes.

    Facebook's testing is still at an early, exploratory stage; the above is simply what we do today.

    English original: What kind of automated testing does Facebook do?

  • Ansible and Salt: A detailed comparison

    The short version is that Ansible and Salt are both awesome. Seriously. If you are lucky enough to work for a company that lets you use either one of them, you’re going to have a great time. Having said that, there are some key differences in the way they each attempt to solve the problems that are faced by modern sysadmins and developers—and what fun would one of these comparisons be if I didn’t tell you my preference at the end? But first, some backstory.

    If you haven’t heard of them before, Ansible and Salt are frameworks that let you automate various system tasks. The biggest advantage that they have relative to other solutions like Chef and Puppet is that they are capable of handling not only the initial setup and provisioning of a server, but also application deployment, and command execution. This means you don’t need to augment them with other tools like Capistrano, Fabric or Func.

    Both Ansible and Salt are capable of taking you from a blank server to a fully-functional application; they can maintain code updates to that application over time; they can quickly run arbitrary commands on an ad-hoc basis; and they can do all of this across hundreds or thousands of different machines. Oh, and they are both built around the concept of using the YAML serialization format to represent configuration and execute commands. This makes them far more pleasant to work with than the competition, and their concise syntax allows you to use the resulting configuration as a form of documentation that non-programmers can easily understand.

    As an experiment, I decided to write a collection of Ansible Roles and Salt States to perform the same set of tasks and configure a brand new Ubuntu 12.04.2 LTS server with the following:

    - Ruby + the Falcon patch for improved performance
    - Nginx + Passenger
    - Bundler
    - A few other crucial packages like git and NTP

    I also wanted Ansible and Salt to handle deploying my simple Sinatra test application. This meant that they also needed to:

    - Create a 'deploy' user that the application files would belong to
    - Reconfigure OpenSSH to only allow access via SSH keys
    - Add my SSH public key to the deploy user's authorized_keys file
    - Set up and enable an Nginx vhost
    - Create all necessary application directories
    - Use git to checkout the latest revision of the application's codebase
    - Create required symlinks
    - Use bundler to install all Gem dependencies
    - Restart Nginx and be completely ready to go

    Mission accomplished. Here is an open source Ansible Playbook that achieves the above, and an open source collection of Salt States that do the same (a brief illustrative sketch of the playbook style follows the links):

    mazer-rackham - Sample Ansible Playbook for Rack applications

    salt-rack - Sample Salt States for Rack applications
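
    As a taste of that style, here is a minimal sketch of a few of the tasks above in Ansible's YAML syntax. It is only a sketch: the host group, paths, and repository URL are hypothetical, not taken from mazer-rackham.

    [code]
    ---
    # Illustrative Ansible tasks (hypothetical names and paths)
    - hosts: webservers
      sudo: yes
      tasks:
        - name: create the deploy user
          user: name=deploy shell=/bin/bash state=present

        - name: install my SSH public key for the deploy user
          authorized_key: user=deploy key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

        - name: check out the latest application code
          git: repo=git://example.com/app.git dest=/srv/app version=master
    [/code]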

    To make comparison easy, I did my best to match the comments in the Salt States with the corresponding Ansible names. Now, let’s talk about some differences.

    [b] Speed[/b]

    Salt is fast. 0mq is an incredibly slick transport layer. It is very satisfying to send commands and get instantaneous feedback once you have a Salt master connected to several minions. Out of the box, Salt is much faster than Ansible because Ansible relies on SSH as its transport layer by default. However, Ansible also supports what it calls Fireball Mode which uses SSH to bootstrap an ephemeral 0mq daemon. In my testing, I couldn’t see any difference whatsoever between Ansible and Salt when they were both using 0mq, though the initial Ansible Fireball bootstrap process can still take a bit because it happens over SSH.

    If you have a workflow where you will routinely need to send simultaneous commands to hundreds upon hundreds of machines, and you cannot afford to wait for Ansible to set up Fireball Mode over SSH, then Salt will be a better fit. Realistically, both Ansible and Salt will probably do just fine for your needs. They are both actively being used in supercomputing clusters with thousands of nodes. Ansible’s default SSH transport is also plenty fast and can easily get the job done across hundreds of servers as long as you’re comfortable with rolling updates.
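
    For reference, a rolling update in Ansible is a one-line addition to a play via the serial keyword; a minimal sketch, with a hypothetical host group:

    [code]
    ---
    # Illustrative: update ten hosts at a time instead of the whole group at once
    - hosts: webservers
      serial: 10
      tasks:
        - name: upgrade installed packages
          apt: upgrade=safe update_cache=yes
    [/code]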

    [b] Security[/b]

    0mq does not natively support encryption, so Salt includes its own AES implementation that it uses to protect its payloads. Recently, a flaw was discovered in this code along with several other remote vulnerabilities. Ansible is largely immune to such issues because its default configuration uses standard SSH and does not require any daemons to be running on the remote servers aside from OpenSSH (and I don’t think there is a single package in the entire world that I trust more). Regarding Fireball Mode, Ansible’s 0mq AES encryption is done using Keyczar instead of a homegrown solution, is not enabled by default, and its connections are ephemeral.

    All of this drastically reduces Ansible’s attack surface, but it isn’t exactly perfect when it comes to security. Ansible uses paramiko for its SSH connections by default. Paramiko is a fine library with a great track record, but Ansible ships with an overly-permissive configuration that won’t warn users if a host’s key changes, nor does it prompt for confirmation the first time that a key is seen. Additionally, the documentation contains some pretty bad advice to turn off StrictHostKeyChecking in order to make remote Mercurial and git checkouts a little more seamless. Fortunately, these issues are easy to work around by using Ansible’s binary SSH option (which will check host keys) and by seeding servers with a proper known_hosts file.

    Still, you are far more likely to be exploited due to a full-blown remote vulnerability than you are to be exploited due to a MITM attack (and the compromise of a Salt master server is equivalent to gaining full root access on all of the minions that connect to it). Despite its lax policy toward host key verification, Ansible remains the clear winner here.

    Update - On July 5, 2013, Ansible 1.2.1 was released. SSH host keys are now checked, and it is more secure than ever before.

    [b] System Impact[/b]

    Salt brings in a lot of dependencies. These dependencies must be installed on every machine it is used on, regardless of whether the system is a master or minion. Because of this, you will likely want to enable the Salt Stack apt repository to make installing Salt and keeping it up-to-date as simple as possible. The master and minions will all be running persistent daemons that enable Salt to perform its magic.

    Ansible’s dependencies are comparatively minimal and only need to be installed on the systems that will be running the ansible and ansible-playbook commands. Its only remote dependency is a Python interpreter, and that comes with almost every Linux distribution by default. Ansible doesn’t leave any traces of its existence on remote systems after it finishes running a playbook, and the daemon-less approach allows it to easily run from a local git checkout.

    Some people might find these distinctions important, but the differences are largely academic when you’re looking at them in terms of system impact (and persistent daemons are a feature if you’re going to be sending a lot of commands). In my experience, the Salt daemons are very lightweight and well-behaved when they are running. The distinction between daemons vs. daemon-less is important for other reasons, however.

    [b] Maintenance[/b]

    Ansible is dramatically easier to maintain. It has been less than a month and a half since I first started using Salt and during that time there have been five releases. To be fair, I would only consider one of those upgrades to be mandatory and that’s the aforementioned 0.15.1 security release. While it’s true that you can easily use Salt to run upgrades across all of your minions, and upgrading the master is a simple apt-get command away, the fact remains that the daemons are an extra layer of moving parts that you simply don’t have to deal with in Ansible.

    The lack of daemons also makes Ansible easier to use on existing servers. Plus, if you want to use the latest version, you just upgrade it in one location and you’re done. The Ansible CHANGELOG is also regularly refreshed so that even if you’re running from the development branch (which a lot of people do) it’s still easy to keep track of what is going on.

    In comparison, I think that it would be wonderful if the Salt project did a better job of publishing Release Notes for their minor updates. As of now, short of digging through the git logs, there is no way to determine what happened in Salt 0.15.2 or 0.15.3. Presumably they contain bug fixes, but the sysadmin in me likes to know what I am getting into before I perform upgrades—especially upgrades to something like Salt that is running as root and that can easily become a core part of your infrastructure.

    [b] Execution Order and Dependency Chains[/b]

    Salt and Ansible take wildly different approaches to controlling execution order.

    Ordering Salt States is a decidedly complicated affair because it necessitates defining tasks in terms of their requirements. You have several options when working with requisites. You can use require statements, require_in statements, or a mixture of the two. The Salt documentation for requisites describes the difference like this:

    Requisite_in statements are the opposite [of requisite statements]. Instead of saying “I depend on something”, requisite_ins say “Someone depends on me”.

    Well, you can depend on this to be confusing. Given my options, I found it more intuitive to think about things in terms of what each individual state required and therefore eschewed the usage of requisite_in statements. I had to use a lot of requisite statements in order to get Salt to execute things in the correct order. My salt-rack States contain 34 requisite statements, and that excludes the initial ‘require:’ line that preceded them in each state. They also contain 5 include statements that are necessary in order for a state file to require states that are located elsewhere.
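
    To make that concrete, a typical require statement looks something like the following sketch (the package and service names are illustrative):

    [code]
    # Illustrative Salt state: the running service requires the installed package
    nginx:
      pkg:
        - installed
      service:
        - running
        - require:
          - pkg: nginx
    [/code]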

    Perhaps the most aggravating example of requisite issues was when I was upgrading to Passenger 4 from 3.0.19. After making the simple changes to accommodate this, I ran state.highstate on a couple of new minions. It failed the first time, but would succeed if I ran it a second time. So right away I knew that I was dealing with an execution order issue that likely had something to do with a task in the ruby-falcon state file not executing when it should.

    It turns out that Passenger had made a subtle change to their install script that caused it to shell out to run a rake task and shell out to run a ruby command at two separate points in the installation process. I discovered this by digging through the Passenger source around where Salt’s output was indicating the install process was exiting. The fix was simply to add two more require lines to that state.

    Ansible, by contrast, runs tasks top to bottom in the order they are written, and event-driven steps are expressed by having a task notify a named handler. You know exactly which handlers are going to be notified if the codebase changes, and you can read in plain English what those handlers will do because they have been given descriptive names.
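
    A minimal sketch of that notify/handler pattern, with illustrative names:

    [code]
    ---
    # Illustrative: restart nginx only when the checkout actually changes something
    - hosts: webservers
      tasks:
        - name: deploy the latest application code
          git: repo=git://example.com/app.git dest=/srv/app
          notify: restart nginx

      handlers:
        - name: restart nginx
          service: name=nginx state=restarted
    [/code]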

    [b] Conclusion[/b]

    If you’ve made it this far, it’s probably no surprise to hear that I prefer Ansible at this time. I really meant what I said at the beginning though. Salt and Ansible are both pretty great and it amounts to an embarrassment of riches that I was even able to do a writeup like this. We are fortunate to live in a world where they both exist. Aside from Ansible, I would still rather use Salt than anything else. I plan on trying to stay current with both of them. Things are moving fast and I am excited to see where both projects are in a year from now.

    I encourage you to check out my Ansible Playbook and my Salt States and see which you prefer. You really can’t go wrong either way.

    https://missingm.co/2013/06/ansible-and-salt-a-detailed-comparison/

  • An excerpt I came across quoted elsewhere.

  • Ansible vs Chef

    This is a tale of a newcomer vs a relative oldie in the Configuration Management (CM) arena. Both are tools to help the sysadmin or devops professional to better manage large numbers of servers. They excel at stuff like repetitive task automation, simultaneous deployment of apps and packages to a group of servers, or configuration and provisioning of new servers from scratch.

    What They Are, How They Work

    Chef was originally released in 2009 - in the world of CM tools, that’s an eternity ago. It is supported by parent sponsor Opscode, and is frequently compared and contrasted to that other old-timer CM tool Puppet. Like Puppet, Chef is also written in Ruby, and its CLI also uses a Ruby-based DSL. Chef utilizes a master-agent model, and in addition to a master server, a Chef installation also requires a workstation to control the master. The agents can be installed from the workstation using the ‘knife’ tool that uses SSH for deployment, easing the installation burden. From there managed nodes authenticate with the master through certificates. Chef doesn't yet have a well-formed push feature, though beta code is available to do this. But in the meantime the implication is that agents must be configured to check in with the master periodically and instantaneous master-to-agent rollout of changes isn't really possible. Continuing with the kitchen metaphor (see ‘knife’ above), Chef configs are packaged into JSON files called ‘recipes’. Also, the software can run in either a client-server or in a standalone mode called ‘Chef-solo’.

    Ansible is quite different from Chef. It is more similar to another upstart, Salt, than to the old boys Chef and Puppet. It was developed and first released in early 2012 by the parent company AnsibleWorks, and is starting to gain a dedicated following despite its youth and untested-ness. It is written in Python and only requires the Python libraries to be present on the servers to be configured, which anyway is the default on almost all Linux distros today. Ansible's USPs are its light weight, relative ease of use and speed of deployment compared to other CM tools. For example you don't need to learn Ruby - Ansible packages all commands into YAML modules called playbooks - and as long as your preferred language can output JSON modules, you're good to go. Ansible also does away with the need for agents; all master-agent communication is handled either via standard SSH commands, or the Paramiko module which provides a Python interface to SSH2. An added bonus is SSH's excellent inbuilt security.
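
    For a taste of what a playbook looks like, here is a minimal hedged sketch (the package is illustrative):

    [code]
    ---
    # Illustrative playbook: make sure NTP is installed and running
    - hosts: all
      tasks:
        - name: install ntp
          apt: pkg=ntp state=present

        - name: ensure ntp is running and starts on boot
          service: name=ntp state=started enabled=yes
    [/code]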

    Support, Performance, Ease of Use

    Chef is an older product, so its documentation is better than Ansible’s. That said, there are complaints by many new to Chef that it is quite confusing to learn compared to the blissfully simple Ansible. Chef offers support for Linux, *nix, and Windows. The browser-based GUI is quite good (again, no surprise considering it’s been around for a few years), although it’s not as complete as Puppet’s, lacking features like reporting and advanced config options. All in all, Chef’s relative maturity means it may appeal to corporations, who place a premium on stability, more than individuals.

    IT guys are famous for avoiding documenting anything, so it’s no surprise that Ansible’s documentation is still a weak point. This, however, is somewhat mitigated by how easy it is to learn. Ansible is currently only available for Linux and Unix, and its GUI is terrible compared to Chef’s – it’s not even synced to the CLI, so you may occasionally find that the GUI and CLI give different results of a query. Ansible’s agent-less push-mode using the ZeroMq implementation at the transport layer means quick deployment and very low performance overhead; the caveat is that it’s just not as flexible and powerful as using agents.

    Conclusion

    First off, any admin or devops will be mighty glad to have such tools in their corner; just a few years ago there was much less choice in this field. Choosing either of them is a win, and your life will be richer and easier for it.

    That said, if you must choose between them, consider your own needs carefully first and weigh them against what each solution offers. (The original post attaches a Chef vs. Ansible comparison table here.)

    References

    http://probably.co.uk/puppet-vs-chef-vs-ansible.html
    http://benscofield.com/on-ansible/
    http://www.infoworld.com/d/data-center/review-puppet-vs-chef-vs-ansible-vs-salt-231308?page=0,1

  • Puppet or Chef? Ansible or Salt?

    [b] Whereas Puppet and Chef will appeal to developers and development-oriented shops, Salt and Ansible are much more attuned to the needs of system administrators. Ansible's simple interface and usability fit right into the sys admin mindset, and in a shop with lots of Linux and Unix systems, Ansible is quick and easy to run right out of the gate.

    Salt is the sleekest and most robust of the four, and like Ansible it will resonate with sys admins. Highly scalable and quite capable, Salt is hamstrung only by the Web UI.

    Puppet is the most mature and probably the most approachable of the four from a usability standpoint, though a solid knowledge of Ruby is highly recommended. Puppet is not as streamlined as Ansible or Salt, and its configuration can get Byzantine at times. Puppet is the safest bet for heterogeneous environments, but you may find Ansible or Salt to be a better fit in a larger or more homogenous infrastructure.

    Chef has a stable and well-designed layout, and while it's not quite up to the level of Puppet in terms of raw features, it's a very capable solution. Chef may pose the most difficult learning curve to administrators who lack significant programming experience, but it could be the most natural fit for development-minded admins and development shops. [/b] For more in-depth looks at these tools, read the full reviews:

    Review: Ansible orchestration is a veteran Unix admin's dream
    http://www.infoworld.com/d/data-center/review-ansible-orchestration-veteran-unix-admins-dream-228509?source=rs

    Review: Chef cooks up configuration management
    http://www.infoworld.com/d/data-center/review-chef-cooks-configuration-management-224178?source=rs

    Review: Puppet 3.0 pulls more strings
    http://www.infoworld.com/d/data-center/review-puppet-enterprise-30-pulls-more-strings-222737?source=rs

    Review: Salt keeps server automation simple
    http://www.infoworld.com/d/data-center/review-salt-keeps-server-automation-simple-228936?source=rs

  • A Web UI is available for Ansible in the form of AnsibleWorks AWX, but AWX doesn't tie directly into the CLI. This means that configuration elements present in the CLI will not appear in the Web UI unless a synchronization pass is run. You can use that included synchronization tool to keep them in line, but it will need to be run on a scheduled basis. The Web UI itself is functional, but is not as complete as the CLI, so you will find yourself working between the two in general use, or just using the CLI.

    SaltStack Enterprise

    [b] Salt is similar to Ansible in that it's a CLI-based tool that utilizes a push method of client communication. It can be installed through Git or through the package management system on masters and clients. Clients will make a request of a master server, which when accepted on the master allows that minion to be controlled.[/b]

    Salt can communicate with clients through general SSH, but the scalability is greatly enhanced through the use of client agents called minions. Also, Salt includes an asynchronous file server to speed up file serving to minions, which is all part of Salt's focus on high scalability.
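
    In states, that file server is addressed with the salt:// scheme; a minimal sketch with illustrative paths:

    [code]
    # Illustrative Salt state: pull a file from the master's file server
    /etc/motd:
      file.managed:
        - source: salt://common/motd
        - mode: 644
    [/code]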

    As with Ansible, you can issue commands to minions directly from the CLI, such as to start services or install packages, or you can use YAML configuration files, called "states," to handle more complex tasks. There are also "pillars," which are centrally located sets of data that states can access while running.
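
    As a hedged sketch of how a state can read pillar data (the pillar key and value here are hypothetical):

    [code]
    # Illustrative pillar data (e.g., in pillar/users.sls):
    #   deploy_user: deploy
    #
    # A state can then reference that value through Jinja templating:
    {{ pillar['deploy_user'] }}:
      user.present:
        - shell: /bin/bash
    [/code]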

    You can request configuration information -- such as kernel version or network interface details -- from minions directly from the CLI. Minions can be delineated through the use of inventory elements, called "grains," which makes it easy to issue commands to a particular type of server without relying on configured groups. For instance, in a single CLI direction, you could target every minion that is running a particular kernel version.

    [b] Like Puppet, Chef, and Ansible, Salt offers a large number of modules to address specific software, operating systems, and cloud services. Custom modules can be written in Python or PyDSL. Salt does offer Windows management as well as Unix, but is more at home with Unix and Linux systems. [/b]

    Salt's Web UI, Halite, is very new and not as complete as the Web UIs for the other systems. It offers views of event logs and minion status, and has the ability to run commands on minions, but little else. This tool is under active development and promises to improve significantly, but for the time being it's bare-bones and buggy.
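
    Returning to grains for a moment: the same grain matching works in the top file, mapping states onto classes of minions. An illustrative sketch (the state name is hypothetical):

    [code]
    # top.sls (illustrative): apply the 'common' state to every Debian minion
    base:
      'os:Debian':
        - match: grain
        - common
    [/code]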

    [b] Salt's biggest advantage is its scalability and resiliency.[/b] You can have multiple levels of masters, resulting in a tiered arrangement that both distributes load and increases redundancy. Upstream masters can control downstream masters and their minions. Another benefit is the peering system that allows minions to ask questions of masters, which can then derive answers from other servers to complete the picture. This can be handy if data needs to be looked up in a real-time database in order to complete a configuration of a minion.
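
    Salt implements this tiering with its syndic mechanism; a minimal sketch of the two master configuration files involved (the hostname is hypothetical):

    [code]
    # On the top-level master (/etc/salt/master):
    order_masters: True

    # On each mid-tier master, which also runs the salt-syndic daemon:
    syndic_master: top-master.example.com
    [/code]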

  • Puppet Enterprise has the most complete Web UI of the bunch, allowing for real-time control of managed nodes using prebuilt modules and cookbooks present on the master servers. The Web UI works well for management, but does not allow for much configuration of modules. The reporting tools are well developed, providing deep details on how agents are behaving and what changes have been made.

    Enterprise Chef

    Chef is similar to Puppet in terms of overall concept, in that there's a master server and agents installed on managed nodes, but[b] it differs in actual deployment. In addition to a master server, a Chef installation also requires a workstation to control the master. The agents can be installed from the workstation using the knife tool that uses SSH for deployment, easing the installation burden. Thereafter, managed nodes authenticate with the master through the use of certificates.[/b]

    Configuration of Chef revolves around Git, so knowledge of how Git works is a prerequisite for Chef operation. [b] Like Puppet, Chef is based on Ruby, so knowledge of Ruby is also required.[/b] As with Puppet, modules can be downloaded or written from scratch, and deployed to managed nodes following required configuration.

    [b] Unlike Puppet, Chef doesn't yet have a well-formed push feature, though beta code is available. This means that agents will need to be configured to check in with the master periodically, and immediate application of changes isn't really possible. [/b]

    The Web UI for Enterprise Chef is functional, but does not provide the ability to modify configurations. It is not as complete as the Web UI for Puppet Enterprise, lacking in reporting and other features, but allows for inventory control and node organization.

    [b] Like Puppet, Chef benefits from a large collection of modules and configuration recipes, and those rely heavily on Ruby. For that reason, Chef is well-suited to development-centric infrastructures. [/b]

    AnsibleWorks Ansible

    [b] Ansible is much more similar to Salt than to either Puppet or Chef. The focus of Ansible is to be streamlined and fast, and to require no node agent installation. Thus, Ansible performs all functions over SSH. Ansible is built on Python, in contrast to the Ruby foundation of Puppet and Chef. [/b]

    Installation of Ansible can be done through a Git repository clone to an Ansible master server. Following that, nodes to be managed are added to the Ansible configuration, and SSH authorized keys are appended to each node, related to the user that Ansible will run under. Once this is done, the Ansible master server can communicate with the node via SSH and perform all required tasks. In order to function with operating systems or distributions that do not allow root SSH access by default, Ansible accepts sudo credentials in order to run commands as root on those systems.

    Ansible can use Paramiko, a Python SSH2 implementation, or standard SSH for communications, but there's also an accelerate mode that allows for faster and larger-scale communication.

    Ansible can be run from the command line without the use of configuration files for simple tasks, such as making sure a service is running, or to trigger updates and reboots. For more complex tasks, Ansible configuration is handled via YAML syntax in configuration files called Playbooks. Playbooks can also use templates to extend their functionality. [b] Ansible has a collection of modules that can be used to manage various systems as well as cloud infrastructure such as Amazon EC2 and OpenStack. Custom Ansible modules can be written in just about any language, as long as the output of the module is valid JSON.[/b]
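
    A short sketch of the playbook-plus-template pattern described here; the file names and the variable are hypothetical, and the variable would be consumed inside the Jinja2 template itself:

    [code]
    ---
    # Illustrative: render a config file from a template, restart on change
    - hosts: dbservers
      vars:
        max_connections: 200
      tasks:
        - name: render my.cnf from a Jinja2 template
          template: src=templates/my.cnf.j2 dest=/etc/mysql/my.cnf
          notify: restart mysql

      handlers:
        - name: restart mysql
          service: name=mysql state=restarted
    [/code]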

  • Six secrets of salary negotiation at November 24, 2014

    Actually, you can skip that application-information form altogether.

    Write your name, mobile number, and email clearly and you can go straight to the interview.

    Writing more is pointless if you're not a fit; and if you are a fit, you'll have to fill all that information in again anyway, show the originals, and hand in photocopies.

  • @研究者July: Summing it up:

    1. The whole team, top to bottom, is self-driven, and its core members complement one another.
    2. Do the right things.
    3. Whoever makes a decision owns it; the president holds a veto, but may use it only three times a year.
    4. Learn to subtract.
    5. Keep learning, and regularly use the products your own team has built.
  • Make every effort to understand the code.

  • Done.

    I met quite a few remarkable people, and many of the conversations taught me a great deal.

    Finally, thanks to everyone for coming to the gathering. Thank you.

  • Arriving around 11:30, before noon, is fine.

  • A very good article; recommended reading.

  • Private cloud management tools face-off at November 20, 2014

    Deploy on demand: treating private cloud rationally

    Cloud computing has won over enterprise IT departments because it delivers cheap, shareable resources over the internet, yet in practice many enterprises choose to build their own cloud environments. I suspect that in many cases this decision comes not from core technical reasons but from executives' need for control: they want management to be ready for a cloud strategy. Meanwhile many enterprises are already using public clouds, even where they have no business doing so, at least in some cases. How do you know which cloud strategy is right for you?

    With an in-house cloud you forgo the core value of cloud computing: resource sharing. You buy hardware and lease data center space, paying operating costs on demand just as before. Even the technical advantages of cloud infrastructure (a high degree of standardization, for example) are diluted.

    Many vendors have noticed this trend and are pushing in-house cloud products hard, deployed on existing servers. With Oracle's ExaLogic as the latest entry-level "fast private cloud" product, HP, IBM, and other enterprise IT providers hoping to stake out cloud territory have joined in.

    Still, there are substantive reasons to use a private cloud rather than a public one, chiefly these:

    Regulation requires it. Your company operates under specific rules that forbid data held on servers from leaving the company's control. In my experience this requirement is rarer than claimed, and in many cases the compliance questions are poorly understood.

    Performance matters. If you run applications that send and receive large volumes of data, internet latency may be unacceptable, and a public cloud may not suit you. This is no excuse for poorly designed applications, however.

    You have specific security needs. Calls for private cloud usually arise from security worries about public cloud, but those worries are not always justified: security questions exist in the cloud whether it is public or private. There are plenty of insecure private clouds and plenty of secure public clouds. You must decide based on your actual needs, not on vague fears.

    You have specific application needs. Although most in-house software resources can be reproduced in the cloud, a few cannot. You may have special application requirements, such as specialized data stores, or other technical issues that keep you on private ground.

    I hope that when you call for a private cloud in place of a public one, you can look at these solutions rationally, and not merely seek to keep your data sealed inside your own servers.

  • Private cloud management tools face-off at November 20, 2014

    What obstacles does building a private cloud face?

    Scale, by definition, means the ability to operate very large server clusters productively and efficiently; does that problem still haunt the people building cloud environments today? Amazon drew attention with its Web Services (AWS) platform in 2007, a vast and thriving operation. With the enormous resources that economies of scale bring, unusually low prices, and flexible service provisioning, AWS became the focus of attention.

    Many imitators have tried to copy its success and repeatedly hit walls. The startup Eucalyptus and cloud platform maker Enomaly both tried to replicate AWS's technology and delivery model, touting the economies of scale that make cloud computing attractive, and came up short. NASA first adopted Eucalyptus, then chose to build its own cloud environment, while Enomaly drew criticism across much of the industry.

    They could build an AWS-like environment, but they inevitably sank into the mire, running so slowly that they could only control a small number of servers. Now, however, users and integrators say that with cloud platforms springing up like mushrooms and users' options widening, those early growing pains have passed.

    "For organizations like NASA, scale is an issue," says Melanie Posey, research director for Web hosting and telecom services at IDC. Scale remains an issue for service providers and colocation hosts, who are becoming the main consumers of cloud computing as they grow rapidly and need to show efficiency and returns. But for enterprises building private clouds, scale is not the main problem.

    Posey notes that technology-heavy industries such as global banking and insurance have already waded into cloud computing. Most of them run the obvious use cases in cloud environments, such as software development and complex, elastic computation. The cloud era has arrived, and enterprises have seen where things are heading.

    "When you look at the bottom line, you stop wanting endless, ever-growing R&D," Posey says. "To some extent what enterprises want to focus on is the benefit that IT delivers, not the technology itself."

    Posey says the technology has matured to the point where software like Eucalyptus can build functional, valuable private clouds for enterprises; but unlike hosted providers' platforms, enterprise cloud environments remain isolated from existing IT systems. Even a large software development cloud with thousands of nodes is only one part of IT delivery.

    In Posey's view the question of scale will recede again as cloud infrastructure practice gradually takes over existing IT, which may take years. New problems will surface in unexpected places as diverse application demands collide, but the direction of the cloud computing model is clear.

    With scale being solved, what is cloud computing's next big problem?

    "Migrating applications into the cloud is still hard, but the first step seems to be solved," says Jeff Schneider, CEO of MomentumSI, a consultancy that builds private clouds. Schneider says his firm serves major financial institutions running private clouds with thousands of nodes.

    MomentumSI uses Eucalyptus as its cloud platform and has its own cloud software stack: newScale for the graphical user interface (GUI) and rPath for managing development projects. Software development for enterprise clouds is a short-term project, he says, because that is where management can see a real return on investment (ROI).

    He says office politics and enterprise IT culture give him more headaches than scaling a cloud platform to larger environments. That is why he has tried to move away from labor-intensive, build-it-yourself cloud construction toward combining different vendors' products into MomentumSI's customized self-service private/hybrid cloud provisioning.

    Schneider says change has arrived as people grow comfortable with the fast service the cloud delivers, but he agrees with Posey that porting applications to cloud environments is becoming strange and troublesome. In particular, he finds, developers are stepping into business process systems.

    "How to get more virtualized infrastructure is no longer the question," he says. "The question that raises is how to move the entire application lifecycle onto the cloud platform."

    Eirikur Hrafnsson, CEO of cloud startup GreenQloud, says the biggest problems are in business, not technology. GreenQloud, based in Iceland, built its cloud environment with an early version of open source Eucalyptus, but in the end chose Eucalyptus's open source competitor, Cloud.com.

    He notes that the early effort showed scale was then out of reach, but that newer versions of Eucalyptus no longer have that problem. He chose Cloud.com mainly because it performed better on scaling and features.

    "Choosing Cloud.com ultimately came down to its architectural approach, the way it fit systems like our storage architecture," he says.

    Hrafnsson says he worries more about handling the tedious hardware support chain and data center management than about cloud platform software.

    He believes there are now many viable cloud platforms. His initial beta environment was small; he hopes to grow it to 10,000 nodes by next April without major changes.

    Update

    Enomaly CEO Richard Reiner stated in an email that the company has received no complaints from customers. Reiner says the company has been developing since 2004 and the technology is quite mature, particularly in scalability.

    "Enomaly ECP has the same history as AWS, and it has matured considerably and offers enormous scalability," he said.

  • Private cloud management tools face-off at November 20, 2014

    In this article we compare the industry's leading third-party private cloud management tools and offer sound advice for choosing one for your environment.

    As more enterprises adopt private clouds, the need for management software has become urgent and obvious. But not all private cloud management tools are created equal, and IT professionals must make sure the tool they choose best fits their organization's needs.

    All the major private cloud vendors offer their own products for building and managing private clouds. Microsoft offers System Center Virtual Machine Manager as the management tool for Hyper-V, while VMware has the vCloud Suite. Similarly, Citrix offers CloudPlatform and Red Hat has CloudForms.

    These management tools provide the basic functions of the virtualized infrastructure that makes up a private cloud environment, but countless third-party products deliver management capabilities far beyond the platform tools, such as multi-platform support, reclamation of wasted storage, and virtual machine (VM) monitoring for performance optimization.

    This article walks through four basic purchasing criteria for server virtualization/private cloud management products and compares four of the most mature and popular third-party private cloud management tools:

    VMTurbo Operations Manager
    Embotics vCommander enterprise cloud management software
    SolarWinds Virtualization Manager
    WhatsUp Gold WhatsVirtual

    Purchasing criteria

    There are many factors to consider when selecting a private cloud management product. The first and most important considerations relate directly to core private cloud functions, such as virtual network configuration or self-service provisioning; but functions in that category are already built into the hypervisor vendors' private cloud software.

    Private cloud management software is better suited to deep analysis of your virtual infrastructure and the workloads running on top of it. For that purpose, the four most important evaluation criteria when choosing a management product are:

    Diagnostics: Server virtualization and private cloud infrastructures are complex. A good management tool must find and resolve problems before they affect virtual machines (VMs).
    Multi-platform support: More environments are running multiple hypervisors to match economics to workload needs. The more platforms a management tool supports, the more complexity it takes off your hands.
    Resource monitoring: The right management tool must reclaim wasted storage and ensure your physical resources are used as efficiently as possible.
    Performance tracking: Keeping your virtual infrastructure balanced and running at peak efficiency is key to getting the most from your private cloud deployment.

    Diagnostics

    Enterprise server virtualization infrastructures are complex because many different components must work together to host workloads reliably. When the infrastructure is configured to run as a private or hybrid cloud, that complexity grows exponentially because of the additional layers of abstraction, and such a complex, widely distributed infrastructure makes potential problems hard to detect and fix before they become real ones. Any good management product should therefore be able to detect conditions that could lead to serious trouble.

    When it comes to diagnostics, SolarWinds Virtualization Manager has a clear advantage. The software includes many features for spotting problem conditions, but the most used are its virtualization dependency mapping and historical forensics. The software is smart enough to understand the relationships between objects in the virtual data center, such as VMs, hosts, and datastores. You can not only map the dependencies between objects but also track how those dependencies change over time, using the historical data the software maintains.

    VMTurbo takes a different approach to diagnostics. Private cloud monitoring software usually raises alerts to notify administrators that something has gone wrong; VMTurbo is not an alert-based solution. Instead, it monitors workloads continuously and makes adjustments automatically to prevent problems from occurring.

    WhatsVirtual provides diagnostic alerts through the WhatsUp Gold Alert Center, a single-pane interface for enterprise-wide alerting and event notification, so alerts related to the server virtualization infrastructure appear in the same interface as all the others.

    Embotics vCommander offers a good and varied feature set, but it lacks diagnostic and remediation capabilities beyond basic performance troubleshooting.

    Multi-platform support

    At first glance, multi-vendor support may not seem important, especially if you use a single vendor's tools today. But there is no guarantee things will stay that way; many enterprises deploy multiple hypervisors to match economics to workload requirements. Over the past year, for example, many VMware shops have started introducing Hyper-V servers, because in some situations using Hyper-V is more cost-effective than buying additional VMware licenses.

    For multi-vendor support, Embotics is the best choice. vCommander is designed to run in VMware and Microsoft environments, and although it offers no Citrix support, it does support Amazon EC2 and HP's public cloud.

    For private clouds that do not need public cloud support, VMTurbo has the edge: it supports VMware vSphere, Microsoft Hyper-V, Citrix XenServer, and Red Hat RHEV.

    By contrast, SolarWinds Virtualization Manager supports only VMware and Hyper-V, and WhatsVirtual supports only vSphere 5 environments.

  • Ah, it is a bit far; then again, the subway in Beijing is cheap. Lunch is at midday; if you can come over and meet everyone, do come.

  • It's a get-together among friends, so everyone is welcome to bring a boyfriend or girlfriend, or family. The weekend is for being with family anyway; since we can't be home with them, we might as well all go out and eat together :)

  • Why don't you go give it a try, Mr. Xingzhe?

  • Cross-platform C# Project generation for every platform. Define your project content once and compile code for every platform, in any IDE or build system, on any operating system.

    [b] One executable[/b]

    Protobuild ships as a single, 120kb executable in your repository. Users don't need to install any software; just double-click Protobuild to generate projects.

    [b] No duplication[/b]

    Don't duplicate support for each platform in multiple C# projects. Protobuild generates C# projects for any platform from a single definition, with options available to include and exclude resources based on the target platform.

    [b] Two-way project sync[/b]

    Reduce hand editing of project definition files. Adding or removing files in your IDE synchronises back to the project when running Protobuild.

    [b] Flexible configuration [/b]

    Protobuild offers multiple levels of customization for your projects. From simple option toggles to complete customization of project files, Protobuild allows you to output projects in the exact format you require.

    [b] Simplified libraries[/b]

    Including a third-party library that uses Protobuild is as simple as git submodule add. Protobuild automatically loads subfolders for additional projects and allows them to be referenced.

    [b] Build only what you need[/b]

    Protobuild provides a powerful dependency system, which can exclude code in projects when no consuming projects use that functionality. This can also be used to provide alternate implementations of functionality.

    [b] No cruft[/b]

    Project definitions in Protobuild contain only what is needed to generate your projects. Irrelevant properties and C# project configuration (such as developer-specific settings) are never tracked in source control.

  • Expanding The Cloud - Introducing The Amazon EC2 Container Service

    By Werner Vogels on 13 November 2014

    Today, I am excited to announce the Preview of the Amazon EC2 Container Service, a highly scalable, high performance container management service. We created EC2 Container Service to help customers run and manage Dockerized distributed applications.

    Benefits of Containers

    Customers have been using Linux containers for quite some time on AWS and have increasingly adopted microservice architectures. The microservices approach to developing a single application is to divide the application into a set of small services, each running its own processes, which communicate with each other. Each small service can be scaled independently of the application and can be managed by different teams. This approach can increase agility and speed of feature releases. The compact, resource efficient footprint of containers was attractive to sysadmins looking to pack lots of different applications and tasks, such as a microservice, onto an instance. Over the past 20 months, the development of Docker has opened up the power of containers to the masses by giving developers a simple way to package applications into containers that are portable from environment to environment. We saw a lot of customers start adopting containers in their production environments because Docker containers provided a consistent and resource efficient platform to run distributed applications. They experienced reduced operational complexity and increased developer velocity and pace of releases.

    Cluster Management Difficulties

    Getting started with Docker containers is relatively easy, but deploying and managing containers, in the thousands, at scale is difficult without proper cluster management. Maintaining your own cluster management platform involves installing and managing your own configuration management, service discovery, scheduling, and monitoring systems. Designing the right architecture to scale these systems is no trivial task. We saw customers struggle with this over and over.

    Leveraging AWS

    When we started AWS, the thinking was we could use Amazon’s expertise in ultra-scalable system software and offer a set of services that could act as infrastructure building blocks to customers. Through AWS, we believed that developers would no longer need to focus on buying, building, and maintaining infrastructure but rather focus on creating new things. Today with EC2 Container Service, we believe developers no longer need to worry about managing containers and clusters. Rather, we think they can go back to creating great applications, containerize them, and leave the rest to AWS. EC2 Container Service helps you capitalize on Docker’s array of benefits by taking care of the undifferentiated heavy lifting of container and cluster management: we are providing you containers as a service. Furthermore, through EC2 Container Service, we are treating Docker containers as core building blocks of computing infrastructure and providing them many of the same capabilities that you are used to with EC2 instances (e.g., VPCs, security groups, etc) at the container level.

    Sign up here for the Preview and tell us what you think. We are just getting started and have a lot planned on our roadmap. We are interested in listening to what features you all would like to use. Head over to Jeff Barr’s blog to learn more about how to use EC2 Container Service.

    http://www.allthingsdistributed.com/2014/11/amazon-ec2-container-service.html

  • Configuration Management Engineer, Shenzhen at November 10, 2014

    E-mail: [email] lisa_song@epam.com[/email]

    Vision Shenzhen Business Park No. 9 Gaoxin 9th South Road Building 8, Floor 8 Shenzhen Hi-Tech Industrial Park Nanshan District Shenzhen China 518057

  • Software Configuration Management Engineer - PMO at November 10, 2014

    That QQ number is pretty nice, quite flashy, heh.