  • Tools and languages are mostly separate concerns, but not entirely: once you choose a language, there is inevitably a toolchain that suits that language best.

  • 1) You even have a dedicated SCM just for the project? Either way, roll up your sleeves and work on it together. 2) If the project values it and genuinely needs one, create it; if it is not needed at all, creating more is pointless. Once the relevant jobs are set up, the permission to maintain and manage each individual job can be delegated to the project team (see the sketch below).
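
    Assuming the jobs live on a Jenkins server with the Matrix Authorization Strategy plugin (the original does not name the tool), per-job delegation might look like this fragment of a job's config.xml; the group name project-team is hypothetical:

        <!-- Hypothetical fragment of a Jenkins job's config.xml: enables
             project-based security so the project team can read, run, and
             maintain just this one job. -->
        <properties>
          <hudson.security.AuthorizationMatrixProperty>
            <permission>hudson.model.Item.Read:project-team</permission>
            <permission>hudson.model.Item.Build:project-team</permission>
            <permission>hudson.model.Item.Configure:project-team</permission>
          </hudson.security.AuthorizationMatrixProperty>
        </properties>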

  • Huizhong Company, configuration management position at October 20, 2016


    The title for this position is Configuration Management Supervisor/Manager. It sits in the Quality Assurance Center and reports to the Director of Testing. The department has two business lines, Testing Services and Configuration Management; the Configuration Management team consists of one configuration management manager and one configuration administrator. Salary 22-26K. Location: Shuangjing, Beijing. E-mail: wupeng@huizhongcf.com, phone 18210280939 (also reachable on WeChat under this number).

  • SVN merge command reports an error at October 18, 2016

    Could you paste the text? The screenshot is too blurry to read.

    From the message, is it asking you to upgrade to SVN 1.7 or later?
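
    If that is the case, upgrading the client and then converting the working copy in place usually looks like this (the path is a placeholder):

        svn --version                       # confirm the client is 1.7 or later
        svn upgrade /path/to/working-copy   # convert the pre-1.7 working-copy format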

  • First, is the Internet a panacea?

    In this wave of Internet startups, founders are quick to proclaim that they will change the world with the Internet, and quick to shout about "cutting out the middleman" and "making information and transactions transparent". Yet users discover that transactions made through such platforms come with their own hidden dealings. Of course, Heima Ge is not dismissing the Internet wholesale, but there is a world of difference between genuinely using the Internet to achieve information symmetry and merely holding it up as a front.

  • Second, is it "Internet Plus", or "Plus Internet"?

    Recently, Heima Ge has been following several startups that moved into the Internet from traditional industries, and all of them are developing steadily. By contrast, many projects that claim to be built purely on "Internet thinking" start out with fierce momentum, then quietly fade away or fold. So Heima Ge now favors people with a traditional-industry background for Internet ventures: whether in education, healthcare, or the used-car market, the waters run deep, and without real accumulation and ability they are hard to wade through.

    Third, does burning cash on advertising to pull in traffic still work?

    Looking at how Guazi has gone after the used-car market, it is still the old Ganji playbook of traditional ad buying and PR. The advertising war between Ganji and 58.com back then was, above all, a traditional ad battle rather than an operation built around UGC and PGC.

    With Guazi today, Yang Haoyong is still following the same marketing model. But times have changed: marketing now emphasizes interacting with users, using varied approaches to spark users into spontaneously generating a second and even further rounds of spread.

    By comparison, Guazi has certainly spent plenty on advertising, but it has all been one-off, one-way communication, with no coordination that turns an ad into multi-dimensional exposure and propagation. Even if the initial traffic looks good, it is hard to sustain.

    In the end, Heima Ge believes the genes for startup success matter a great deal. As an individual entrepreneur Yang Haoyong counts as successful, but as a company operator his record is merely passable. After this executive shake-up, how Guazi will change in personnel, operations, and market expansion is something Heima Ge will watch and wait to see.

  • Level P6, no stock, AutoNavi (Gaode), Beijing

  • I have indeed been lazy lately

  • You go ahead, handsome; I'm not up to it... That said, word is that Huawei Cloud has been poaching hard from JD lately and has gathered quite a crowd.

  • This position looks good; bumping it. Friends in Shanghai should take a look.

  • Coolblue's continuous deployment at September 7, 2016

    The original article follows:

    CONTINUOUS DEPLOYMENT OF OUR SOFTWARE (written by Paul de Raaij on 20 July 2015)

    Just developing software is not enough. You need to get it into production, and you want to be sure that you deploy quality software, not broken software. In this article, part of our microservices journey, I'll describe how we have set up our deployment pipeline so that developers can do their own deployments while complying with our standards.

    LANGUAGE AGNOSTIC The deployment pipeline as we have created it is language agnostic, which means that the process to assure quality and deploy software is the same for a PHP application as for a .NET application. The tools used for quality assurance and packaging may differ, but the process is identical. This keeps things simple and reduces the cognitive load on developers, who need to understand the process for all microservices, which may be developed in different languages.

    DESIGNING THE DEPLOYMENT PROCESS When we started designing our deployment process, we were looking for a solution that could support us in deploying high-quality software. At that point we were already using GitHub and its pull-request system for version control. What we wanted was a system that could verify the quality of a pull request before merging it into the main repository.

    If a pull request is validated and passes all checks, it should be allowed to be merged into the main repository. All steps after the merge should be executed automatically and eventually result in a deployment to production. The diagram below gives a bird's-eye overview.

    [Diagram: bird's-eye overview of the deployment pipeline (continuous-deployment.002)]

    AUTOMATING THE PROCESS Each build step consists of a set of actions that we define in the project itself. These actions are defined in Ant build scripts and can be anything from creating a folder to calling a command-line tool that does the actual inspection. These build scripts are not triggered manually; we use a build server to guide the whole build process. A build server can react to triggers and start the appropriate action.
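
    As a rough illustration (target names, paths, and the phpunit call are hypothetical, not Coolblue's actual scripts), such an Ant build script might look like this:

        <project name="service-build" default="verify">
          <!-- Create the folder that will hold inspection reports. -->
          <target name="prepare">
            <mkdir dir="build/reports"/>
          </target>
          <!-- Call a command-line tool that does the actual inspection. -->
          <target name="verify" depends="prepare">
            <exec executable="vendor/bin/phpunit" failonerror="true">
              <arg value="--log-junit"/>
              <arg value="build/reports/phpunit.xml"/>
            </exec>
          </target>
        </project>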

    For our build process we decided to use TeamCity as our build server. There are many (open-source) alternatives available, but TeamCity fit our needs perfectly.

    CONTINUOUS INTEGRATION This step can be triggered for several reasons: the creation or update of a pull request by a developer, or the merge of a pull request into the main repository. Either way, the purpose of the step is the same: validating that the quality of the entire repository matches our standards.

    When executing this step, the build server runs a variety of quality tests, which are split into two groups: automated tests and static code analyzers.

    The automated-tests group consists of unit tests and functional tests; think of tools such as NUnit, PHPUnit, or Behat. These tests validate that the tested functionality still matches the expectations encoded in the tests. They are mainly used for regression testing, which boils down to the question: does my change do what I expect, and does it break any other functionality?
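
    A minimal sketch of such a regression test, here with PHPUnit (the Cart and Product classes are hypothetical stand-ins for code under test, not Coolblue's code):

        <?php
        use PHPUnit\Framework\TestCase;

        // Hypothetical regression test: assumes a Cart and a Product class
        // exist in the code base being tested.
        class CartTest extends TestCase
        {
            public function testAddedProductEndsUpInTheCart(): void
            {
                $cart = new Cart();
                $cart->add(new Product('coffee-machine'));

                $this->assertSame(1, $cart->count());
            }
        }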

    Static code analyzers are tools that generate code metrics and check for coding violations. In addition, for scripting languages, linting tools like PHPLint check whether the code can be executed at all. These kinds of tools do not target functionality but focus on code quality itself: is your code consistent in indentation, do you consistently use the same naming conventions, and do you have a lot of duplicated code in your repository?
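
    For a PHP codebase, such checks might be invoked like this (paths are placeholders; phpcs and phpcpd are common community tools named here as examples, not tools the article confirms):

        php -l src/Checkout.php                # lint: can the file be parsed at all?
        vendor/bin/phpcs --standard=PSR2 src/  # coding-standard violations
        vendor/bin/phpcpd src/                 # duplicate-code detection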

    PACKAGING If the continuous integration step succeeds, we move on to the next step: packaging. The purpose of this step is simply to create a single deliverable that can install itself onto servers.

    Within Coolblue we distinguish two types of packages. For Linux systems we use RPM packages and the RedHat package manager; Windows systems are packaged into NuGet packages and deployed via Octopus Deploy.

    We deliberately chose to create packages that are close to the operating system, which makes life a bit easier. For example, by choosing the RedHat package manager we chose a well-known process: known to developers and system administrators, who all have more than basic knowledge of the yum command, and known as a process for distributing packages, since as a RedHat user you know how repositories work and how they combine with yum.
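
    Concretely, that familiar workflow boils down to commands like these (the package name is hypothetical; the repository would be an internal one):

        yum install example-service   # pull the package from the repository
        yum update example-service    # roll out a newer build of the package
        rpm -q example-service        # check which version is installed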

    There are a lot of useful tools in the open-source community, like Capistrano, that could help with distributing artifacts to a production environment. All of those tools have their own pros and cons; for us, it felt like they would add a lot of unnecessary complexity to our pipeline and process.

    Packaging and delivering .NET services is a different story. It is no problem to package .NET applications into a Windows-native package format, but hooking into a generic package manager on Windows is not possible, so we tackled this problem a bit differently.

    .NET applications are packaged into the NuGet package format, NuGet being an open-source package manager for Windows environments. The built packages are then distributed in the later steps. The generation and deployment of NuGet packages is worth an article of its own, which we will publish in the future.

    CONTINUOUS DELIVERY We have created a package containing our inspected code repository; now it is time to get it onto a server so it can actually be used. This third step of the build pipeline publishes to development and acceptance servers.

    Before I explain exactly what we do in this step, it is wise to get a clear view of how continuous delivery differs from continuous deployment. The difference between the two is small but significant.

    Continuous delivery covers all automated deployments of packages to any environment except production; continuous deployment describes the automated deployment of a package to production. So during this step our goal is to get the generated package onto different servers so the deliverable can be checked and is available to other team members. The deliveries made in this step may differ per project; for example, some projects have an acceptance environment we need to publish to, others do not.

    CONTINUOUS DEPLOYMENT With the continuous deployment step we reach the end of our deployment pipeline. In this step we actually deploy the application onto production servers. For Linux environments that means adding the package to our internal repository server; Windows environments push and install the package onto the production nodes.

    For us this step currently isn't triggered automatically: a developer needs to trigger this build step manually to get his changes onto the production servers. This isn't something we want, but it is necessary due to the lack of automated post-deployment tests.

    In post-deployment tests we want to check whether the deployment succeeded, something we currently do manually: for example, checking that the homepage returns an HTTP status code of 200, that we are still able to add a product to the shopping cart, and that our checkout process is still available. These are the most vital tests we have after a deploy, since they are critical to our business.
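
    A minimal sketch of how the homepage check could be automated (the URL is a placeholder; this is an illustration, not Coolblue's actual test):

        #!/bin/sh
        # Fail the deployment step if the homepage does not return HTTP 200.
        STATUS=$(curl -s -o /dev/null -w '%{http_code}' https://www.example.com/)
        if [ "$STATUS" -ne 200 ]; then
            echo "Smoke test failed: homepage returned $STATUS" >&2
            exit 1
        fi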

    Once we have automated these post-deployment tests and are able to instantly revert a deployment, we will automate this trigger and have true continuous deployment.

    At a high level, this article describes our continuous deployment process: a process that works fine for now, but one we are still optimizing. Objectives on our radar include automated post-deployment tests and canary deployments.

    http://devblog.coolblue.nl/tech/continuous-deployment-of-our-software/

  • Just ask IBM's technical support about this directly. The product is not cheap, and few people use it.

  • Kubernetes, Docker Swarm, and Mesos are popular container frameworks that have drawn wide attention from open-source enthusiasts and enterprises. Driven by ever-growing user demands and continuous product improvement, their feature sets keep maturing and new releases keep shipping. Because of its length, the piece was split into two installments; this is the second.

    1. Elastic scaling and desired-state reconciliation

    Kubernetes sets the number of Pods through a ReplicationController (RC), whose controller automatically maintains that count to keep the service highly available. The Kubernetes master continuously checks cluster state and adjusts any pods that deviate from the declared rules. For example, if the declared state is a set of 3 pods running the current service and one pod fails or stops for some reason, the controller tries to start a replacement pod on another worker node. Actual resource usage is also monitored, and allocations are adjusted toward the optimal amount (within preset upper and lower bounds).
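
    A minimal sketch of such a declaration (names and image are hypothetical):

        apiVersion: v1
        kind: ReplicationController
        metadata:
          name: web
        spec:
          replicas: 3            # the desired state the controller reconciles toward
          selector:
            app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
                - name: web
                  image: nginx
                  resources:
                    limits:      # preset upper bounds for this container
                      cpu: 500m
                      memory: 256Mi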

    With Docker Swarm, for each service you declare how many tasks you want to run; when you scale that number up or down, the Swarm manager automatically adds or removes tasks to maintain the desired state. The manager nodes continuously monitor cluster state and reconcile any difference between the actual state and the desired state you declared. For example, if you create a service that runs 10 replicas of a container and a worker machine hosting two of them crashes, the manager creates two new replicas to replace the crashed ones and assigns them to workers that are running and available.
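
    The equivalent CLI sketch, assuming a Swarm-mode cluster (service name and image are hypothetical):

        docker service create --name web --replicas 10 nginx
        docker service ps web         # see where the manager placed the tasks
        docker service scale web=15   # raise the desired count; Swarm reconciles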

    Mesos is managed through Marathon. The Mesos master assigns tasks to slave nodes and notifies Marathon when adjustments are needed; the Mesos slaves run the containers and report each node's current resource usage.

    2. Networking

    Kubernetes officially supports numerous networking solutions such as Flannel, Calico, Romana, and Contiv. It guarantees every pod an IP in a flat, shared network space, through which the pod can communicate across the network with other physical machines or pods. One IP per pod creates a clean, backward-compatible model in which pods can be treated like virtual or physical machines from the perspective of port allocation, networking, name resolution, service discovery, load balancing, application configuration, and migration.

    Docker Swarm uses multi-host networking: you can specify an overlay network for your services, and when the Swarm manager initializes or updates the application, it automatically assigns addresses to the containers on that overlay network.
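
    A minimal sketch (network and service names are hypothetical; requires Swarm mode):

        docker network create --driver overlay app-net
        docker service create --name web --network app-net --replicas 3 nginx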

    In 1.0.0, Mesos guaranteed support for CNI, the container networking standard drafted with the participation of multiple network vendors. By being CNI-compatible, Mesos indirectly supports networking technologies such as VXLAN, DC/OS overlay, Calico, Weave, and Flannel. Following per-container IPs, this is another major networking feature for Mesos.

    3. Service discovery and load balancing

    Kubernetes uses Services to define the discoverable services/ports published by containers, as well as communication with external proxies. A Service maps a port to an externally reachable port, where the mapped port belongs to containers running in Pods across multiple nodes. External load balancers such as nginx-ingress can also be used; internal load balancing for services is handled by kube-proxy.
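
    A minimal Service sketch (names, labels, and ports are hypothetical):

        apiVersion: v1
        kind: Service
        metadata:
          name: web
        spec:
          type: NodePort
          selector:
            app: web           # route to pods carrying this label
          ports:
            - port: 80         # cluster-internal service port
              targetPort: 8080 # container port inside the pods
              nodePort: 30080  # externally reachable port on every node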

    Docker Swarm manager nodes assign each service on the cluster a unique DNS name and load-balance across its running containers. Every container running on the cluster can be looked up through the DNS server embedded in Swarm.

    Mesos, in turn, uses Mesos-DNS, which lets applications and services on Mesos discover one another via DNS. Mesos-DNS is lightweight and stateless, and easy to deploy and maintain.
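
    With Mesos-DNS, a Marathon-launched app is conventionally resolvable under <app>.marathon.mesos, so discovery can be as plain as (the app name web is hypothetical):

        dig +short web.marathon.mesos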

    4. Security

    Kubernetes introduced RBAC (role-based access control), an implementation that originated in the OpenShift project. (The RBAC system is built on Kubernetes resources, letting roles, permissions, and bindings act dynamically through first-class API interactions, in place of the static flat files that earlier Kubernetes versions kept behind their authZ machinery. RBAC was added in Kubernetes 1.3 and simplifies setting up multi-tenant environments for different business groups, teams, or billing arrangements.)
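
    A minimal sketch of a role and its binding (namespace, user, and names are hypothetical; shown with the rbac.authorization.k8s.io/v1 API, which in the 1.3 era described here was still alpha):

        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          namespace: team-a
          name: pod-reader
        rules:
          - apiGroups: [""]
            resources: ["pods"]
            verbs: ["get", "list", "watch"]
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          namespace: team-a
          name: read-pods
        subjects:
          - kind: User
            name: alice
            apiGroup: rbac.authorization.k8s.io
        roleRef:
          kind: Role
          name: pod-reader
          apiGroup: rbac.authorization.k8s.io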

    Docker Swarm secures the communication between each node and all other nodes by enforcing mutual TLS authentication and encryption; you can choose between self-signed root certificates and a custom root CA certificate.

    Mesos responded in 1.0.0 with a finer-grained authorization mechanism. First, in 1.0.0 all of Mesos's sensitive endpoints are encrypted with SSL/TLS. Second, Mesos administrators can now configure ACLs so that users see only their own tasks in the WebUI/API, a must-have for enterprise users. Finally, Mesos provides a complete authorizer interface through which enterprise users can plug in their own security policies.

    5. Rolling upgrades

    All three offer the same capability here: during an upgrade, old and new versions of a service coexist while the old instances are gradually scaled down and the new ones scaled up until the rollout is complete. If anything goes wrong, you can roll back to the pre-upgrade version.
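
    In Kubernetes, for example, such a rollout and rollback might look like this (deployment and image names are hypothetical):

        kubectl set image deployment/web web=registry.example.com/web:2.0
        kubectl rollout status deployment/web   # old pods drain as new ones come up
        kubectl rollout undo deployment/web     # roll back on failure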

    6. GPU support

    All three support NVIDIA GPUs to some degree and can schedule NVIDIA GPUs as a resource.

    7. Community strength and partnerships

    Kubernetes builds on Google's decades of experience running containers at scale, Red Hat's years of experience deploying and managing open-source software in the enterprise, CoreOS's agile development experience, and the strengths of many other organizations and community members. It is also backed by global technology giants such as Microsoft and Huawei.

    Docker Swarm is designed, developed, and maintained by Docker itself.

    Mesos, as an established technology, also has broad backing from teams such as IBM, Microsoft, and NVIDIA. IBM has become the second-largest contributor of Mesos code after Mesosphere, and will invest in Mesos's optimistic offers, resource-allocation optimization, and POWER-platform compatibility. Mesos has also introduced experimental support for running on Microsoft Windows.

    8. Other

    Kubernetes added vsphere_volume volume management in v1.3. As the VSphereConfig struct shows, the Kubernetes vSphere volume plugin can connect directly to VMware vCenter. If Kubernetes is deployed in virtual machines on vSphere, volumes can be added to Kubernetes by attaching disks to those virtual machines.
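
    A minimal sketch of consuming such a volume in a pod (the datastore path and all names are hypothetical):

        apiVersion: v1
        kind: Pod
        metadata:
          name: vsphere-demo
        spec:
          containers:
            - name: app
              image: nginx
              volumeMounts:
                - name: data
                  mountPath: /data
          volumes:
            - name: data
              vsphereVolume:
                volumePath: "[datastore1] volumes/demo.vmdk"
                fsType: ext4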

    Mesos has also been doing container-related development recently. The first item is nested containers: when running a container, it can be split into smaller containers inside it, so that once the outer container manages a given resource boundary, smaller containers can be attached within it. This is very useful for running CI/CD, such as Jenkins, on Mesos: each Jenkins instance gets its own resource ceiling, and if it runs ten CI jobs, Mesos gives them the resources it already has while guaranteeing they never exceed the ceiling Mesos granted.

    VM support means Mesos can run VMs directly instead of containers. For more traditional IT enterprises that cannot yet jump straight from OpenStack or VMs into containers, this is a very important feature.

  • Beijing SCM - Anbang Insurance Group at August 18, 2016

    In real life, I have indeed seen this happen many times.

    When the company is hiring, no suitable candidate turns up; and when a good candidate wants to change jobs, the company no longer has the headcount.

  • Want me? I'll come work for you.

  • svn 500 error, help wanted at August 15, 2016

    If multiple SVN repositories share one Apache configuration and the other repositories are fine while only this one fails, then it is most likely still a data problem in that repository.

  • Is your SVN hosted on your own local machine?

  • The company definitely profits; it is only a question of how much, and of whether the boss is generous.

  • I haven't worked there, so I'm not sure. Surely configuration management counts toward the headcount?? Last time, a bank outsourcing role for configuration management could also pay up to 30K.

  • Bank outsourcing headcount rates are expensive. Why does this one pay so little, especially when travel is required?