Simple-Jekyll-Search
A JavaScript library to add search functionality to any Jekyll blog.
Find it on npmjs.com
idea from this blog post
Promotion: check out Pomodoro.cc
Demo
Install
bower install --save simple-jekyll-search
# or
npm install --save simple-jekyll-search
Getting started
Place the following code in a file called search.json
in the root of your Jekyll blog.
This file will be used as a small data source to perform the searches on the client side:
---
---
[
  {% for post in site.posts %}
    {
      "title" : "{{ post.title | escape }}",
      "category" : "{{ post.category }}",
      "tags" : "{{ post.tags | join: ', ' }}",
      "url" : "{{ site.baseurl }}{{ post.url }}",
      "date" : "{{ post.date }}"
    } {% unless forloop.last %},{% endunless %}
  {% endfor %}
]
"category" : "",
"tags" : "ubuntu, sudo, sshd, security",
"url" : "/2017/05/configuring-new-ubuntu-server-with-sudo/",
"date" : "2017-05-26 00:08:00 -0400"
} ,
{
"title" : "Detecting if WebMock is enabled for Net::HTTP",
"category" : "",
"tags" : "WebMock, HTTParty",
"url" : "/2017/05/detecting-webmock-enabled-net-http/",
"date" : "2017-05-17 17:43:52 -0400"
} ,
{
"title" : "Static Hosting with Neocities",
"category" : "",
"tags" : "static, neocities, jekyll",
"url" : "/2017/05/static-hosting-with-neocities/",
"date" : "2017-05-12 12:43:51 -0400"
} ,
{
"title" : "Intro to Tmux",
"category" : "",
"tags" : "tmux, screen",
"url" : "/2016/06/tmux-intro/",
"date" : "2016-06-30 13:05:00 -0400"
} ,
{
"title" : "Unbricking TP-Link TL-WDR4300",
"category" : "",
"tags" : "wifi, tp-link, wdr4300, openwrt, mesh",
"url" : "/2016/04/tp-link-wdr4300-recovery/",
"date" : "2016-04-14 00:00:00 -0400"
} ,
{
"title" : "Getting Started with IRSSI",
"category" : "",
"tags" : "irc",
"url" : "/2016/04/getting-started-with-irssi/",
"date" : "2016-04-09 00:00:00 -0400"
} ,
{
"title" : "Recommended Gems",
"category" : "",
"tags" : "gems, ruby",
"url" : "/2016/03/recommended-gems/",
"date" : "2016-03-05 00:00:00 -0500"
} ,
{
"title" : "Looping through dictionaries in jinja2 templates",
"category" : "",
"tags" : "jinja2, ansible, templates",
"url" : "/2015/11/looping-through-dictionaries-in-jinja2-templates/",
"date" : "2015-11-05 05:10:32 -0500"
} ,
{
"title" : "Vagrant SSH Failure - Connection closed by remote host",
"category" : "",
"tags" : "",
"url" : "/2015/09/vagrant-ssh-failure-connection-closed-by-remote-host/",
"date" : "2015-09-10 22:21:52 -0400"
} ,
{
"title" : "Error when building PhantomJS 2.0",
"category" : "",
"tags" : "phantomjs",
"url" : "/2015/08/error-when-building-phantomjs-2-0/",
"date" : "2015-08-08 04:34:12 -0400"
} ,
{
"title" : "Setup Environment for Django Development",
"category" : "",
"tags" : "python, django, virtualenv, pip",
"url" : "/2015/02/setup-environment-for-django-development/",
"date" : "2015-02-02 03:26:28 -0500"
} ,
{
"title" : "Issues with RVM after upgrade to OS X Mavericks",
"category" : "",
"tags" : "",
"url" : "/2014/10/issues-with-rvm-after-upgrade-to-os-x-mavericks/",
"date" : "2014-10-02 00:00:00 -0400"
} ,
{
"title" : "Bypassing the AngularJS router for anchor tags",
"category" : "",
"tags" : "AngularJS, ngRoute, routing, anchor, CSV download",
"url" : "/2014/09/bypassing-the-angularjs-router-for-anchor-tags/",
"date" : "2014-09-18 22:19:44 -0400"
} ,
{
"title" : "Sharing Administrative Rights with Homebrew",
"category" : "",
"tags" : "homebrew, mac-osx, permissions",
"url" : "/2014/06/sharing-administrative-rights-with-homebrew/",
"date" : "2014-06-29 21:52:42 -0400"
} ,
{
"title" : "InstructureCon Hack Day",
"category" : "",
"tags" : "canvas, instructure, lti-integration",
"url" : "/2014/06/instructurecon-hack-day/",
"date" : "2014-06-17 22:40:05 -0400"
} ,
{
"title" : "Strong Parameters with Spree Extensions",
"category" : "",
"tags" : "Spree",
"url" : "/2014/04/strong-parameters-with-spree-extensions/",
"date" : "2014-04-20 04:05:29 -0400"
} ,
{
"title" : "Ruby Class Name",
"category" : "",
"tags" : "",
"url" : "/2014/03/ruby-class-name/",
"date" : "2014-03-21 00:32:23 -0400"
} ,
{
"title" : "Using 'for in' in Javascript",
"category" : "",
"tags" : "javascript, JsLint",
"url" : "/2014/03/using-for-in-in-javascript/",
"date" : "2014-03-18 05:04:24 -0400"
} ,
{
"title" : "How to 'head' a text file in Ruby",
"category" : "",
"tags" : "ruby, head",
"url" : "/2014/01/how-to-head-a-text-file-in-ruby/",
"date" : "2014-01-31 01:51:32 -0500"
} ,
{
"title" : "Objective C Notes",
"category" : "",
"tags" : "",
"url" : "/2014/01/objective-c-notes/",
"date" : "2014-01-09 10:46:44 -0500"
} ,
{
"title" : "Recommended Sublime 3 Packages",
"category" : "",
"tags" : "sublime text, lint",
"url" : "/2013/12/recommended-sublime-3-packages/",
"date" : "2013-12-17 20:39:00 -0500"
} ,
{
"title" : "Setting up PostgreSQL for Rails",
"category" : "",
"tags" : "postgresql",
"url" : "/2013/11/setting-up-postgresql-for-rails/",
"date" : "2013-11-21 23:14:34 -0500"
} ,
{
"title" : "ComputerName: not set",
"category" : "",
"tags" : "oh-my-zsh",
"url" : "/2013/10/computername-not-set/",
"date" : "2013-10-03 02:20:22 -0400"
} ,
{
"title" : "Bundler Definitions",
"category" : "",
"tags" : "bundler",
"url" : "/2013/09/bundler-definitions/",
"date" : "2013-09-01 23:16:38 -0400"
} ,
{
"title" : "Yard Documentation",
"category" : "",
"tags" : "Yard",
"url" : "/2013/09/yard-documentation/",
"date" : "2013-09-01 00:50:47 -0400"
} ,
{
"title" : "Exploring Bundler Commands",
"category" : "",
"tags" : "rubygems, bundler, Thor",
"url" : "/2013/08/exploring-bundler-commands/",
"date" : "2013-08-30 04:11:49 -0400"
} ,
{
"title" : "Paperclip URL and Path",
"category" : "",
"tags" : "paperclip, s3",
"url" : "/2013/08/paperclip-url-and-path/",
"date" : "2013-08-28 23:04:13 -0400"
} ,
{
"title" : "Open Source Ideas",
"category" : "",
"tags" : "",
"url" : "/2013/08/open-source-ideas/",
"date" : "2013-08-26 01:47:39 -0400"
} ,
{
"title" : "Precompiling Rails 4 Assets When Deploying to Heroku",
"category" : "",
"tags" : "heroku, rails4, asset pipeline",
"url" : "/2013/08/precompiling-rails4-assets-when-deploying-to-heroku/",
"date" : "2013-08-25 03:13:01 -0400"
} ,
{
"title" : "Resetting Paths for Homebrew",
"category" : "",
"tags" : "homebrew, command line, cmdline",
"url" : "/2013/08/resetting-paths-for-homebrew/",
"date" : "2013-08-22 22:31:38 -0400"
} ,
{
"title" : "Time Management",
"category" : "",
"tags" : "time management",
"url" : "/2013/07/time-management/",
"date" : "2013-07-30 08:25:04 -0400"
} ,
{
"title" : "Ruby Strftime",
"category" : "",
"tags" : "dates, times",
"url" : "/2013/07/ruby-strftime/",
"date" : "2013-07-27 01:16:42 -0400"
} ,
{
"title" : "Uptime Monitoring and Alerts",
"category" : "",
"tags" : "",
"url" : "/2013/07/uptime-monitoring-and-alerts/",
"date" : "2013-07-26 03:34:25 -0400"
} ,
{
"title" : "Installing Rails 3.2.13",
"category" : "",
"tags" : "Rails 3",
"url" : "/2013/07/installing-rails-3-2-13/",
"date" : "2013-07-16 06:00:40 -0400"
} ,
{
"title" : "POW RVM ZSH",
"category" : "",
"tags" : "rvm, pow, zsh",
"url" : "/2013/07/pow-rvm-zsh/",
"date" : "2013-07-15 03:07:38 -0400"
} ,
{
"title" : "Devise_For with Skip",
"category" : "",
"tags" : "devise",
"url" : "/2013/07/devise_for-with-skip/",
"date" : "2013-07-12 01:26:51 -0400"
} ,
{
"title" : "Project / Task Management Applications",
"category" : "",
"tags" : "planning, analysis, project management",
"url" : "/2013/07/project-task-management-applications/",
"date" : "2013-07-08 02:11:31 -0400"
} ,
{
"title" : "Refinery Extension Not Named After Model",
"category" : "",
"tags" : "refinery-cms",
"url" : "/2013/06/refinery-extension-not-named-after-model/",
"date" : "2013-06-18 09:01:25 -0400"
} ,
{
"title" : "Splitting a Branch with Git",
"category" : "",
"tags" : "git",
"url" : "/2013/06/splitting-a-branch-with-git/",
"date" : "2013-06-05 22:18:38 -0400"
} ,
{
"title" : "Application Builders",
"category" : "",
"tags" : "builder",
"url" : "/2013/06/application-builders/",
"date" : "2013-06-05 10:11:12 -0400"
} ,
{
"title" : "Technical Debt",
"category" : "",
"tags" : "technical debt",
"url" : "/2013/06/technical-debt/",
"date" : "2013-06-03 21:43:05 -0400"
} ,
{
"title" : "Why Ruby Was Named After a Gemstone",
"category" : "",
"tags" : "ruby, perl",
"url" : "/2013/06/why-ruby-was-named-after-a-gemstone/",
"date" : "2013-06-03 19:54:30 -0400"
} ,
{
"title" : "Downloadable Documentation",
"category" : "",
"tags" : "documentation",
"url" : "/2013/05/downloadable-documentation/",
"date" : "2013-05-30 21:20:00 -0400"
} ,
{
"title" : "History of Internationalization in Software",
"category" : "",
"tags" : "unicode",
"url" : "/2013/05/history-of-internationalization-in-software/",
"date" : "2013-05-29 21:09:34 -0400"
} ,
{
"title" : "Mobile Application Performance Monitoring and Management",
"category" : "",
"tags" : "mobile",
"url" : "/2013/05/mobile-application-performance-monitoring-and-management/",
"date" : "2013-05-28 22:45:57 -0400"
} ,
{
"title" : "Use Ruby to Develop iOS or Mac OSX",
"category" : "",
"tags" : "ios, mac-osx",
"url" : "/2013/05/use-ruby-to-develop-ios-or-mac-osx/",
"date" : "2013-05-22 04:53:31 -0400"
} ,
{
"title" : "Uninstalling Command Line Tools for Xcode",
"category" : "",
"tags" : "xcode",
"url" : "/2013/05/uninstalling-command-line-tools-for-xcode/",
"date" : "2013-05-20 23:51:42 -0400"
} ,
{
"title" : "Setting Rspec as the Default",
"category" : "",
"tags" : "generators, workflow",
"url" : "/2013/05/setting-rspec-as-the-default/",
"date" : "2013-05-19 08:19:49 -0400"
} ,
{
"title" : "Remote Pair Programming",
"category" : "",
"tags" : "pair programming, remote pair programming, coding presentation",
"url" : "/2013/04/remote-pair-programming/",
"date" : "2013-04-22 20:48:59 -0400"
} ,
{
"title" : "Languages Supported by Github Flavored Markdown",
"category" : "",
"tags" : "yardoc, github, markdown",
"url" : "/2013/04/languages-supported-by-github-flavored-markdown/",
"date" : "2013-04-12 23:57:57 -0400"
} ,
{
"title" : "Coding Games",
"category" : "",
"tags" : "javascript, code games",
"url" : "/2013/04/coding-games/",
"date" : "2013-04-11 23:30:59 -0400"
} ,
{
"title" : "Customize your IRB",
"category" : "",
"tags" : "irb",
"url" : "/2013/03/customize-your-irb/",
"date" : "2013-03-22 20:36:54 -0400"
} ,
{
"title" : "htaccess tester",
"category" : "",
"tags" : "htaccess",
"url" : "/2013/03/htaccess-tester/",
"date" : "2013-03-01 11:43:17 -0500"
} ,
{
"title" : "Using Find Each to Process Batches",
"category" : "",
"tags" : "batch processing",
"url" : "/2013/02/using-find-each-to-process-batches/",
"date" : "2013-02-28 23:49:14 -0500"
} ,
{
"title" : "Development Time",
"category" : "",
"tags" : "time, estimate",
"url" : "/2013/02/development-time/",
"date" : "2013-02-28 05:02:05 -0500"
} ,
{
"title" : "Minecraft Mods",
"category" : "",
"tags" : "minecraft, mods, craftbukkit, console",
"url" : "/2013/02/minecraft-mods/",
"date" : "2013-02-10 05:30:53 -0500"
} ,
{
"title" : "Obtain MySQL Query Statistics using Explain",
"category" : "",
"tags" : "mysql, explain",
"url" : "/2013/02/obtain-mysql-query-statistics-using-explain/",
"date" : "2013-02-07 03:56:31 -0500"
} ,
{
"title" : "Git Branching Model",
"category" : "",
"tags" : "",
"url" : "/2013/02/git-branching-model/",
"date" : "2013-02-06 22:57:45 -0500"
} ,
{
"title" : "Referencing Gem Source Code",
"category" : "",
"tags" : "gem, gem source, unpack",
"url" : "/2013/02/referencing-gem-source-code/",
"date" : "2013-02-06 08:11:46 -0500"
} ,
{
"title" : "You can be a programmer too!",
"category" : "",
"tags" : "",
"url" : "/2012/12/you-can-be-a-programmer-too/",
"date" : "2012-12-28 05:52:52 -0500"
} ,
{
"title" : "Spree Extension Development Environment using RVM",
"category" : "",
"tags" : "Spree, extension",
"url" : "/2012/12/spree-extension-development-environment-using-rvm/",
"date" : "2012-12-25 13:30:38 -0500"
} ,
{
"title" : "Creating a Gem",
"category" : "",
"tags" : "gem",
"url" : "/2012/12/creating-a-gem/",
"date" : "2012-12-25 12:50:03 -0500"
} ,
{
"title" : "Ruby File Modes",
"category" : "",
"tags" : "file",
"url" : "/2012/12/ruby-file-modes/",
"date" : "2012-12-21 04:14:20 -0500"
} ,
{
"title" : "Return FALSE or Raise Error?",
"category" : "",
"tags" : "ruby, exception handling",
"url" : "/2012/12/return-false-or-raise-error/",
"date" : "2012-12-20 21:35:44 -0500"
} ,
{
"title" : "When Testing Seems Pointless",
"category" : "",
"tags" : "testing, tdd, unit-testing",
"url" : "/2012/12/when-testing-seems-pointless/",
"date" : "2012-12-19 05:51:53 -0500"
} ,
{
"title" : "Using Rspec to Test Controllers",
"category" : "",
"tags" : "rspec, controller",
"url" : "/2012/12/using-rspec-to-test-controller/",
"date" : "2012-12-13 04:46:41 -0500"
} ,
{
"title" : "Good Guy Greg",
"category" : "",
"tags" : "testing, rspec",
"url" : "/2012/12/good-guy-greg/",
"date" : "2012-12-12 00:48:15 -0500"
} ,
{
"title" : "Using Rails 2.3.8",
"category" : "",
"tags" : "rails, Rails-2.3.8, bundler",
"url" : "/2012/12/using-rails-2-3-8/",
"date" : "2012-12-03 22:57:38 -0500"
} ,
{
"title" : "Rspec Executable Not Found",
"category" : "",
"tags" : "rvm, rspec",
"url" : "/2012/11/rspec-executable-not-found/",
"date" : "2012-11-28 04:14:27 -0500"
} ,
{
"title" : "Changing the Default Text Editor",
"category" : "",
"tags" : "git, text-editor",
"url" : "/2012/11/changing-the-default-text-editor/",
"date" : "2012-11-20 00:30:07 -0500"
} ,
{
"title" : "Metaclass",
"category" : "",
"tags" : "metaprogramming, metaclass",
"url" : "/2012/09/metaclass/",
"date" : "2012-09-19 22:19:03 -0400"
} ,
{
"title" : "Using Super with Ruby class methods",
"category" : "",
"tags" : "superclass",
"url" : "/2012/09/using-super-with-ruby-class-methods/",
"date" : "2012-09-19 00:06:32 -0400"
} ,
{
"title" : "Ruby Coloured Glasses",
"category" : "",
"tags" : "",
"url" : "/2012/09/ruby-coloured-glasses/",
"date" : "2012-09-18 22:46:48 -0400"
} ,
{
"title" : "Duplicate associated records when using FactoryGirl",
"category" : "",
"tags" : "factory_girl",
"url" : "/2012/09/issues-with-duplicate-associated-records-when-using-factorygirl/",
"date" : "2012-09-10 02:26:46 -0400"
} ,
{
"title" : "Finding Records without Specific Child in Many-to-Many Relationship",
"category" : "",
"tags" : "mysql, many-to-many",
"url" : "/2012/07/finding-records-without-specific-child-in-many-to-many-relationship/",
"date" : "2012-07-17 00:57:49 -0400"
} ,
{
"title" : "Listing Gems from Rails Console",
"category" : "",
"tags" : "",
"url" : "/2012/06/listing-gems-from-rails-console/",
"date" : "2012-06-21 00:59:03 -0400"
} ,
{
"title" : "Add a Serialized Hash Attribute to a Factory_Girl Definition",
"category" : "",
"tags" : "factory_girl, hash",
"url" : "/2012/06/add-a-serialized-hash-attribute-to-a-factory_girl-definition/",
"date" : "2012-06-11 23:25:13 -0400"
} ,
{
"title" : "List Sorted Methods in Ruby",
"category" : "",
"tags" : "",
"url" : "/2012/06/list-sorted-methods-in-ruby/",
"date" : "2012-06-11 21:06:48 -0400"
} ,
{
"title" : "Updating a Serialized Object from a Web form",
"category" : "",
"tags" : "serialize",
"url" : "/2012/06/updating-a-serialized-object-from-a-web-form/",
"date" : "2012-06-06 21:19:01 -0400"
} ,
{
"title" : "RSpec Controller Tests Receiving 'No route matches' Error",
"category" : "",
"tags" : "rspec, namespaced controller",
"url" : "/2012/05/rspec-controller-tests-receiving-no-route-matches-error/",
"date" : "2012-05-25 21:24:17 -0400"
} ,
{
"title" : "Cubase Installation Failure",
"category" : "",
"tags" : "cubase, package scripts, install failure",
"url" : "/2012/05/cubase-installation-failure/",
"date" : "2012-05-23 06:48:01 -0400"
} ,
{
"title" : "Generators Not Working in Rails 2.3.8",
"category" : "",
"tags" : "",
"url" : "/2012/05/generators-not-working-in-rails-2-3-8/",
"date" : "2012-05-16 04:14:15 -0400"
} ,
{
"title" : "Establishing New Ruby Environment in a Folder using RVM",
"category" : "",
"tags" : "rvm",
"url" : "/2012/05/establishing-new-ruby-environment-in-a-folder-using-rvm/",
"date" : "2012-05-15 20:58:16 -0400"
} ,
{
"title" : "History of the Canonical Gem Host for Ruby Gems",
"category" : "",
"tags" : "",
"url" : "/2012/05/history-of-the-canonical-gem-host-for-ruby-gems/",
"date" : "2012-05-14 21:41:23 -0400"
} ,
{
"title" : "Using Serialize Option with ActiveRecord Objects",
"category" : "",
"tags" : "rails3.1, forms",
"url" : "/2012/04/using-serialize-option-with-activerecord-objects/",
"date" : "2012-04-25 22:01:26 -0400"
} ,
{
"title" : "Save the Tests, Don't Throw Them Away",
"category" : "",
"tags" : "testing, tdd",
"url" : "/2012/04/save-the-tests-dont-throw-them-away/",
"date" : "2012-04-20 20:48:02 -0400"
} ,
{
"title" : "Factory Girl Associations and Records Persisting Across Tests",
"category" : "",
"tags" : "rails3.1, testing, factory_girl, tdd",
"url" : "/2012/04/factory-girl-associations-records-persisting-across-tests/",
"date" : "2012-04-12 01:32:23 -0400"
} ,
{
"title" : "Generating Test File Stubs for Existing Models, Views, and Controllers",
"category" : "",
"tags" : "rails3.1, testing, rspec, tdd",
"url" : "/2012/04/generating-rspec-tests-for-existing-models-views-controllers/",
"date" : "2012-04-03 16:10:12 -0400"
} ,
{
"title" : "Rails 3 and Subclasses Method",
"category" : "",
"tags" : "rails3.1",
"url" : "/2012/03/rails-3-and-subclasses-method/",
"date" : "2012-03-26 21:38:14 -0400"
} ,
{
"title" : "Locate and Updatedb with Homebrew",
"category" : "",
"tags" : "os x lion, findutils, homebrew",
"url" : "/2012/03/locate-and-updatedb-with-homebrew/",
"date" : "2012-03-26 17:08:59 -0400"
} ,
{
"title" : "Foreign Key References when Generating Model",
"category" : "",
"tags" : "migrations",
"url" : "/2012/03/foreign-key-references-when-generating-mode/",
"date" : "2012-03-23 21:15:26 -0400"
} ,
{
"title" : "Edit Devise User without Password",
"category" : "",
"tags" : "rails, devise",
"url" : "/2012/03/edit-devise-user-without-password/",
"date" : "2012-03-20 19:38:43 -0400"
} ,
{
"title" : "Factory Girl Not Generating Factories with Scaffold",
"category" : "",
"tags" : "rails3.1, factory_girl",
"url" : "/2012/03/factory-girl-not-generating-factories-with-scaffold/",
"date" : "2012-03-19 17:18:44 -0400"
} ,
{
"title" : "Ruby Comparison Operator =~",
"category" : "",
"tags" : "comparison operator, ruby",
"url" : "/2012/03/ruby-comparison-operator/",
"date" : "2012-03-07 21:26:34 -0500"
} ,
{
"title" : "Invalid Gemspec Error Regarding Invalid Date Format",
"category" : "",
"tags" : "rails3.1, rubygems",
"url" : "/2012/02/invalid-gemspec-error-invalid-date-format/",
"date" : "2012-02-22 21:30:45 -0500"
} ,
{
"title" : "Deleting Git Branches in Remote Repository",
"category" : "",
"tags" : "git",
"url" : "/2012/01/deleting-git-branches-in-remote-repository/",
"date" : "2012-01-24 21:08:06 -0500"
} ,
{
"title" : "Rails 3 on WHM / cPanel VPS Server",
"category" : "",
"tags" : "capistrano, passenger, rails3.1, cpanel",
"url" : "/2012/01/rails-3-on-whm-cpanel-vps-server/",
"date" : "2012-01-08 16:14:58 -0500"
} ,
{
"title" : "Configuring Rails 3.1.3 under Sub-URI",
"category" : "",
"tags" : "rails3.1, relative_url_root, sub-uri",
"url" : "/2012/01/configuring-rails-3-1-3-under-sub-uri/",
"date" : "2012-01-04 05:02:40 -0500"
} ,
{
"title" : "Custom Rake Tasks Not Loading",
"category" : "",
"tags" : "rake",
"url" : "/2012/01/custom-rake-tasks-not-loading/",
"date" : "2012-01-01 21:57:34 -0500"
} ,
{
"title" : "Troubleshooting ActiveResource Requests",
"category" : "",
"tags" : "ActiveResource, HighRise, REST API",
"url" : "/2012/01/troubleshooting-activeresource-requests/",
"date" : "2012-01-01 21:46:52 -0500"
} ,
{
"title" : "Example Rake Task",
"category" : "",
"tags" : "rake",
"url" : "/2012/01/example-rake-task/",
"date" : "2012-01-01 21:26:28 -0500"
} ,
{
"title" : "Adding a New User in Ubuntu",
"category" : "",
"tags" : "",
"url" : "/2011/12/adding-a-new-user-in-ubuntu/",
"date" : "2011-12-29 21:08:48 -0500"
} ,
{
"title" : "Resolving issues with Namespaced Models in Rails 3.1.0",
"category" : "",
"tags" : "rails, namespaced models, rails3.1",
"url" : "/2011/12/resolving-issues-with-namespaced-models-in-rails-3-1-0/",
"date" : "2011-12-06 22:09:51 -0500"
} ,
{
"title" : "Paperclip error with non-image file",
"category" : "",
"tags" : "paperclip, identify, imagemagick, non-image",
"url" : "/2011/12/paperclip-error-with-non-image-file/",
"date" : "2011-12-05 19:43:25 -0500"
} ,
{
"title" : "Issues with RVM",
"category" : "",
"tags" : "rails, os x lion, rvm",
"url" : "/2011/11/issues-with-rvm/",
"date" : "2011-11-13 23:11:04 -0500"
} ,
{
"title" : "Setting up Ubuntu for Rails App via Passenger",
"category" : "",
"tags" : "",
"url" : "/2011/10/setting-up-ubuntu-for-rails-app-via-passenger/",
"date" : "2011-10-30 01:11:11 -0400"
} ,
{
"title" : "Formtastic use of semantic_form_remote_for",
"category" : "",
"tags" : "",
"url" : "/2011/10/formtastic-use-of-semantic-form-remote-for/",
"date" : "2011-10-20 18:29:02 -0400"
} ,
{
"title" : "Exporting Routes in Rails 3",
"category" : "",
"tags" : "",
"url" : "/2011/10/exporting-routes-rails/",
"date" : "2011-10-20 17:36:54 -0400"
} ,
{
"title" : "Using URL Helpers in Models or Rake Tasks",
"category" : "",
"tags" : "",
"url" : "/2011/10/using-url-helpers-models-or-rake-tasks/",
"date" : "2011-10-19 19:41:51 -0400"
} ,
{
"title" : "Building a Query String from a Hash with Rails 3",
"category" : "",
"tags" : "",
"url" : "/2011/10/building-query-string-from-hash-rails/",
"date" : "2011-10-19 17:35:09 -0400"
} ,
{
"title" : "Rails 3 Autoloading with Namespaced Models",
"category" : "",
"tags" : "",
"url" : "/2011/10/rails-autoloading-namespaced-models/",
"date" : "2011-10-18 18:50:03 -0400"
} ,
{
"title" : "Form Fields not Displaying with Formtastic",
"category" : "",
"tags" : "",
"url" : "/2011/10/form-fields-not-displaying-formtastic/",
"date" : "2011-10-18 13:48:34 -0400"
} ,
{
"title" : "Adding Event Listeners to Google-Maps-for-Rails Markers",
"category" : "",
"tags" : "",
"url" : "/2011/10/adding-event-listeners-to-google-maps-for-rails-markers/",
"date" : "2011-10-11 14:24:27 -0400"
} ,
{
"title" : "Issues with Bluetooth in OS X Lion after Upgrade",
"category" : "",
"tags" : "",
"url" : "/2011/09/issues-with-bluetooth-in-os-x-lion-after-upgrade/",
"date" : "2011-09-28 22:08:39 -0400"
} ,
{
"title" : "Redirect_to not working",
"category" : "",
"tags" : "http post",
"url" : "/2011/09/redirect_to-not-working/",
"date" : "2011-09-27 19:37:59 -0400"
} ,
{
"title" : "Advanced Use of Will_Paginate",
"category" : "",
"tags" : "rails, pagination, will_paginate",
"url" : "/2011/09/advanced-use-of-will_paginate/",
"date" : "2011-09-23 21:08:56 -0400"
} ,
{
"title" : "Getting File object for Paperclip Attachment via S3",
"category" : "",
"tags" : "rails, csv, paperclip, s3",
"url" : "/2011/09/getting-file-object-for-paperclip-attachment-via-s3/",
"date" : "2011-09-22 18:34:10 -0400"
} ,
{
"title" : "Issues with MacPorts After Upgrading to OS X Lion",
"category" : "",
"tags" : "macports",
"url" : "/2011/09/issues-with-macports-after-upgrading-to-os-x-lion/",
"date" : "2011-09-16 21:10:00 -0400"
} ,
{
"title" : "Error: 'unintitialized constant MySQL' with Rails 3 on Snow Leopard Mac",
"category" : "",
"tags" : "",
"url" : "/2011/05/error-unintitialized-constant-mysql-with-rails-3-on-snow-leopard-mac/",
"date" : "2011-05-20 02:05:08 -0400"
} ,
{
"title" : "Installing PHPdoc for Ubuntu for use with Command Line",
"category" : "",
"tags" : "",
"url" : "/2011/05/installing-phpdoc-for-ubuntu/",
"date" : "2011-05-18 23:21:45 -0400"
} ,
{
"title" : "Ruby on Rails session - Access from PHP",
"category" : "",
"tags" : "",
"url" : "/2010/12/ruby-on-rails-session-access-from-php/",
"date" : "2010-12-14 03:25:37 -0500"
} ,
{
"title" : "Obtaining Request Domain Name for Ruby on Rails",
"category" : "",
"tags" : "",
"url" : "/2010/10/obtaining-request-domain-name-for-ruby-on-rails/",
"date" : "2010-10-12 04:19:16 -0400"
} ,
{
"title" : "Changing Column Order via ActiveRecord Migration",
"category" : "",
"tags" : "",
"url" : "/2010/08/changing-column-order-via-activerecord-migration/",
"date" : "2010-08-16 01:09:40 -0400"
} ,
{
"title" : "Rails Performance Statistics",
"category" : "",
"tags" : "",
"url" : "/2010/08/rails-performance-statistics/",
"date" : "2010-08-09 05:26:40 -0400"
} ,
{
"title" : "RailRoad Gem",
"category" : "",
"tags" : "diagram, models, activerecord",
"url" : "/2010/08/railroad/",
"date" : "2010-08-09 05:07:44 -0400"
} ,
{
"title" : "Annotate Models",
"category" : "",
"tags" : "rails, plugin",
"url" : "/2010/08/annotate-models/",
"date" : "2010-08-08 01:02:30 -0400"
} ,
{
"title" : "Selenium RC, Firefox 3, and Ubuntu",
"category" : "",
"tags" : "linkedin",
"url" : "/2010/07/selenium-rc-firefox-3-and-ubuntu/",
"date" : "2010-07-29 21:46:26 -0400"
} ,
{
"title" : "Undefined method 'ref' for ActiveSupport::Dependencies:Module",
"category" : "",
"tags" : "",
"url" : "/2010/07/unable-to-run-rails-migrations-mysql-gem-on-snow-leopard/",
"date" : "2010-07-27 13:19:01 -0400"
} ,
{
"title" : "Setting up Deployment for Rails using Capistrano, Apache with Passenger and Git",
"category" : "",
"tags" : "rails, capistrano, git, passenger",
"url" : "/2010/07/setting-up-deployment-for-rails-using-capistrano-apache-with-passenger-and-git/",
"date" : "2010-07-21 02:48:23 -0400"
} ,
{
"title" : "Rake Tasks",
"category" : "",
"tags" : "",
"url" : "/2010/07/rake-tasks/",
"date" : "2010-07-16 20:37:56 -0400"
} ,
{
"title" : "MySQL Gem Installation on Mac 10.5.8 - 64 bit??",
"category" : "",
"tags" : "",
"url" : "/2010/07/mysql-gem-installation-on-mac-10-5-8-64-bit/",
"date" : "2010-07-16 20:26:14 -0400"
} ,
{
"title" : "Wordpress Plugin - Custom Pages?",
"category" : "",
"tags" : "wordpress, plugin, permalinks",
"url" : "/2010/06/wordpress-plugin-custom-pages/",
"date" : "2010-06-29 01:39:15 -0400"
} ,
{
"title" : "Ubuntu 9.10 Karmic Koala - VNC resolution limited without monitor",
"category" : "",
"tags" : "",
"url" : "/2010/02/ubuntu-9-10-karmic-koala-vnc-resolution/",
"date" : "2010-02-20 10:29:28 -0500"
} ,
{
"title" : "PHP Not Parsing on Debian / Ubuntu server with Apache2",
"category" : "",
"tags" : "",
"url" : "/2010/02/php-not-parsing-on-debian-ubuntu-server/",
"date" : "2010-02-11 00:03:59 -0500"
} ,
{
"title" : "Dell Dimension 3000 - Audio is Choppy",
"category" : "",
"tags" : "",
"url" : "/2010/01/dell-dimension-3000-audio-is-choppy/",
"date" : "2010-01-13 21:49:45 -0500"
} ,
{
"title" : "Un-Hide Someone in Facebook",
"category" : "",
"tags" : "",
"url" : "/2010/01/un-hide-someone-in-facebook/",
"date" : "2010-01-11 20:40:15 -0500"
} ,
{
"title" : "Selenium - no display specified",
"category" : "",
"tags" : "",
"url" : "/2010/01/selenium-no-display-specified/",
"date" : "2010-01-11 17:51:10 -0500"
} ,
{
"title" : "Common Computer Mistakes",
"category" : "",
"tags" : "",
"url" : "/2009/11/common-computer-mistakes/",
"date" : "2009-11-02 17:59:02 -0500"
} ,
{
"title" : "Rounded Corners",
"category" : "",
"tags" : "",
"url" : "/2009/04/rounded-corners/",
"date" : "2009-04-24 20:11:18 -0400"
} ,
{
"title" : "Database Schema Information",
"category" : "",
"tags" : "mysql, rails",
"url" : "/2009/03/database-schema-information/",
"date" : "2009-03-13 20:20:50 -0400"
} ,
{
"title" : "PHP Compilation",
"category" : "",
"tags" : "php",
"url" : "/2005/02/php-compile/",
"date" : "2005-02-28 05:22:01 -0500"
} ,
{
"title" : "MBox and Linux Test Server",
"category" : "",
"tags" : "mbox, linux",
"url" : "/2005/02/mbox-and-linux-test-server/",
"date" : "2005-02-23 18:25:01 -0500"
} ,
{
"title" : "PHP/MySQL Bug Tracking",
"category" : "",
"tags" : "bug tracking",
"url" : "/2004/09/bug-tracking/",
"date" : "2004-09-09 16:51:00 -0400"
}
]
You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)
For example in _layouts/default.html:
<!-- Html Elements for Search -->
<div id="search-container">
<input type="text" id="search-input" placeholder="search...">
<ul id="results-container"></ul>
</div>
<!-- Script pointing to jekyll-search.js -->
<script src="/bower_components/simple-jekyll-search/dest/jekyll-search.js" type="text/javascript"></script>
Configuration
Customize SimpleJekyllSearch by passing in your configuration options:
SimpleJekyllSearch({
searchInput: document.getElementById('search-input'),
resultsContainer: document.getElementById('results-container'),
json: '/search.json',
})
searchInput (Element) [required]
The input element on which the plugin listens for keyboard events to trigger searching and rendering of matching articles.
resultsContainer (Element) [required]
The container element in which the search results should be rendered. Typically a <ul>.
json (String|JSON) [required]
You can either pass in a URL to the search.json
file, or pass the results as JSON directly to save a round trip for the data.
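For example, the data can be passed inline instead of as a URL. This is only a sketch, reusing the sample record shown further down in this README:
SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  // inline data instead of a URL, so no extra request is made
  json: [
    {
      "title" : "Welcome to Jekyll!",
      "category" : "",
      "tags" : "",
      "url" : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",
      "date" : "2014-11-01 21:07:22 +0100"
    }
  ]
})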
searchResultTemplate (String) [optional]
The template of a single rendered search result.
The templating syntax is simple: enclose the properties you want to replace in curly braces.
For example, the template
<li><a href="{url}">{title}</a></li>
will render to the following
<li><a href="/jekyll/update/2014/11/01/welcome-to-jekyll.html">Welcome to Jekyll!</a></li>
If the search.json
contains this data
[
{
"title" : "Welcome to Jekyll!",
"category" : "",
"tags" : "",
"url" : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",
"date" : "2014-11-01 21:07:22 +0100"
}
]
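To use the template, pass it via the searchResultTemplate option alongside the required settings, for example:
SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
  // template string taken from the example above
  searchResultTemplate: '<li><a href="{url}">{title}</a></li>'
})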
templateMiddleware (Function) [optional]
A function that is called whenever a match in the template is found.
It is passed the current property name, the property value, and the template.
If the function returns a value other than undefined, that value replaces the match in the template.
This can be useful for manipulating URLs, for example.
Example:
SimpleJekyllSearch({
...
templateMiddleware: function(prop, value, template){
if( prop === 'bar' ){
return value.replace(/^\//, '')
}
}
...
})
See the tests for an in-depth code example
noResultsText (String) [optional]
The HTML that will be shown if the query didn’t match anything.
limit (Number) [optional]
You can limit the number of posts rendered on the page.
fuzzy (Boolean) [optional]
Enable fuzzy search to allow less restrictive matching.
exclude (Array) [optional]
Pass in a list of terms you want to exclude (terms are matched against a regex, so both URLs and plain words are allowed).
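For reference, here is a sketch that combines the optional settings above in one call (the values shown are illustrative, not library defaults):
SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
  // shown when a query matches nothing
  noResultsText: '<li>No results found</li>',
  // cap the number of rendered results
  limit: 10,
  // allow less restrictive matching
  fuzzy: true,
  // terms to exclude from results (illustrative value)
  exclude: ['welcome']
})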
Enabling full-text search
Replace ‘search.json’ with the following code:
---
layout: null
---
[
{
"title" : "MIDI MSB/LSB Explained",
"category" : "",
"tags" : "midi",
"url" : "/2024/03/midi-msb-lsb-explained/",
"date" : "2024-03-18 01:34:00 -0400",
"content" : "I was trying to understand MIDI better, so that I know the difference betweennote messages, controller change messages (CC), and System Exclusive (SysEx).Ultimately my goal is to better understand and work with MIDI and MIDI devicesin Cubase.I ended up coming across references to Most Significant Byte (MSB) and LeastSignificant Byte (LSB), which seems related to Bit Numbering. HoweverI’m seeing MSB referred to as Most Significant Byte (not Bit), and LSB referredto as Least Significant Byte (not Bit).I tried to get an explanation to contextualize what this means in the contextof MIDI controller change messages, but didn’t find much that was really clear,other than this article -Changing patches over MIDI using Bank Select Controller.ExplainedHere is how I would explain it to someone.MIDI LimitationsWhen the MIDI specification was first developed, it wasn’t foreseen that anyonewould need MIDI control change messages to have a value in a range greater than0 - 127.“Who would need more than 128 different patches/programs to choose from?”“Who would need a resolution of more than 128 for the instrument’s volume?”By the way, MIDI refers to the different patches, or instruments, supported bya device as “programs”.Because of this, MIDIs design does not allow you to send a value higher than 128in a single message. Remember that 8 bits of binary can represent values0 - 255, so MIDI limited values to 7 bits (0 - 127).Overcoming the LimitationsWhen sound modules came out with more than 128 programs, manufacturerstried to overcome this limitation by organizing the programs into “banks”. Byusing a single Controller Change message to specify the bank, you couldhave 128 banks multiplied by 128 programs each, for a total of16384 programs you can switch to.“Who would need more than 16384 programs?”. At this point I think they didn’twant to limit systems again, so the MIDI specifications were updated toaccomodate any future needs.14 Bits of ResolutionThis is where the MSB/LSB scheme comes in.For situations where Control Change messages might need to specify a value withmuch higher resolution (more than 0 - 127), they decided to create pairs ofmessages that each would send a value between 0 - 127. Each value is 7 bits,for a combination of 14 bits, and thus a value range of 0 - 16383.The first value, is called the Most Significant Byte (MSB). The second value,which is also 7 bits long, is called the Least Significant Byte (LSB). Thisterminology simply communicates that the first value is more significant thanthe second value in determining the ultimate value derrived from both combined.If you actually look at the Control Change messages that are supported forselecting the Program Bank, Control Change message number 0 (‘CC#0’), is the“Bank Select MSB” value. Control Change message Number 32 (CC#32) is designatedas the “Bank Select LSB” value.This means that you can specify up to 16384 banks, each including 128 programs,for a total of 2,097,152 programs that can be specified by sending 3 messages: Bank Select MSB Bank Select LSB Program ChangeWho could possibly need more than over 2 million program changes, right?Other Control ChangesThis story makes the most sense in terms of Banks and Program changes, but italso applies to other Control Changes. All the original control change messagesdesignated for Modulation Wheel, Breath Controller, Foot Controller, Volume,Balance, Pan, etc. 
have equivalent “LSB” messages designated to increase theresolution of their values if needed.You can see them all defined in MIDI 1.0 Control Change Messages (Data Bytes)"
} ,
{
"title" : "Playing Video in your Subaru Starlink System",
"category" : "",
"tags" : "subaru, starlink, video, photos",
"url" : "/2023/09/subaru-starlink-video-photos/",
"date" : "2023-09-16 17:53:00 -0400",
"content" : "If you own a Subaru Outback or Legacy released in 2015 - 2018 that includesthe Starlink system (the one without Apple Carplay), then you might not knowthat you can do the following cool things with a USB drive or SD card. View a slideshow of images Upload an image to display when you turn the “screen off” Sit back and watch videos while the car is parked (with parking break on)You can find details on how this is done in the2017 Legacy and Outback Subaru Starlink Owners Manual, but I’m here toextract that and tell you what to do.Note: Don’t add files to the navigation/map system SD card, as this will breakthe navigation.General The USB thumb drive or SD card should be formatted as FAT 16 or FAT 32 Maximum number of folders: 3000 Maximum number of files: 9999 Maximum files per a folder: 255 MP3/WMA/AAC files in folders up to 8 levels deep can be played. However, thestart of playback may be delayed when using discs containing numerous levelsof folders. For this reason, we recommend creating discs with no more than 2levels of folders.MusicThe player relies on ID3 tags included in the files to identify the titleof the artists, album, track, cover art, etc. If you import CDs or MP3 files youbuy on Bandcamp into the Apple Music app, you edit this metadata for your music,and even apply cover artwork.The manual states that the Starlink system supports streaming audio from iPoddevices (classic, iPod Touch 1st - 5th gen, iPod nano 1st - 7th gen, iPhone1 - 5). I’m using an iPhone 14 Pro and it works fine.Compatible Files Types: MP3 - ‘.mp3’ extension WMA - ‘.wma’ extension AAC - ‘.m4a’ extensionVideoEnsure to engage the parking brake when watching video content, otherwise youwill only get a blue video thumbnail.iPod video is not supported.The following video codecs are supported: WMV9 - WMV or AVI file type MPEG4 - MPEG4 or AVI file type H.264/AVC - MPEG4 or AVI file typeThe following video resolutions are supported: Low definition television 128×96 (MMS-Small - Lowest size recommended for use with 3GPP video) 160×120 (QQVGA - Lowest commonly used video resolution) 176×144 (QCIF Webcam) 320×240 (QVGA, NTSC square pixel) 352×240 (SIF (525) - NTSC-standard VCD / super-long-play DVD) 352×288 (CIF / SIF (625) - PAL-standard VCD / super-long-play DVD) 480p 640×480 (480p - 4:3 ratio) 720×480 (480p - 3:2 ratio) 576p 720×576 (considered standard definition for PAL regions) If you are using a Mac, I recommend using Handbrake to convert your videos.I used the “Fast 480p30” and “Fast 576p25” presets in Handbrake to convertvideos and they worked just fine.Images The compatible photo file extensions are JPG and JPEG.SlideshowYou can view a slideshow of images if you are parked with the emergency parkingbreak engaged. From the “INFO” menu the USB drive or SD card will show up asan option, select that option. A menu will display (e.g. “USB Photo”) withoptions shown to display a Slide Show, with Play Time (Fast, Normal Slow) andPlay Mode (Normal, Random) options. Image files can be viewed at the same time that audio files are being playedback. Ensure to engage the parking brake when watching video content. If not, only ablue screen will be displayed. Audio, however, can be heard normally.Screen OffYou can set an image as the screen off images by uploading images from a USBdrive or SD card. When saving the images to a USB or SD card, make sure to place them in an‘Image’ folder located in the root folder of the drive. If this folder nameis not used, the system cannot download the images. 
The folder nameis case sensitive. Go to Home > Settings > General, then tap “Customize Screen Off Image” After you’ll uploaded the images to your system, you can go to Home >Settings > Screen Off to make your console display the “Screen Off” image."
} ,
{
"title" : "Generating Jekyll Posts from an External Source",
"category" : "",
"tags" : "jekyll, generators, posts, cms",
"url" : "/2023/09/generating-jekyll-posts-from-an-external-source/",
"date" : "2023-09-16 17:31:00 -0400",
"content" : "I want to use a headless CMS with Jekyll as the source of my blog posts.There aren’t many plugins that aim to faciliate this.There is a WordPress jekyll-import tool, but this is intended for a onetime import of Wordpress content to Markdown files inside of your Jekyllproject, not a continual build process that sources all content from an API.The Jekyll EngineJekyll Posts are just a natively supported form of Jekyll collection. Thedocumentation for Jekyll even states that if you configure your “collections” toload from a different directory, you will need to move your _posts and_drafts folder under that directory as well.Jekyll builds your site through a process that involves the following steps: Read - Reads data from directories/files into the Jekyll::Site object Generate - Runs each of the Generators defined by plugins you’ve installed orcoded yourself Render - Renders content in memory (markdown converted to HTML, SASS convertedto CSS, etc.) Cleanup - Removes orphaned files and empty directories in destination Write - Writes static files, pages, and posts to build directory(e.g. _site)The entire process is about “reading” data from files into Ruby objects that arestored inside of the Jekyll::Site object. This includes pages, posts,collections, and data.After this it runs the generators defined by Jekyll plugins you install, or thatyou write yourself.Next it goes through a rendering process, where the content loaded from Markdownfiles is converted to HTML. For instance, inside of each Jeyll::Documentobject used to represent each blog post, the Markdown is stored to the‘content’ attribute, but the rendered HTML is stored in the ‘output’ attribute.The cleanup step performs the necessary file cleaning up in the site destinationdirectory.Lastly all the rendered data is then written to actual files under thedestination directory (e.g. 
defaults to _site).Here’s how I inspected objects in memory through each step of the site buildingprocess using IRB.require 'jekyll'options = { "source" => File.expand_path("."), "destination" => File.expand_path("./_site"), "incremental" => true, "profile" => true, "watch" => true, "serving" => true,}# merge build options with configuration dataoptions = Jekyll.configuration(options)# initialize the site objectsite = Jekyll::Site.new(options)site.class# => Jekyll::Site# initialize attribute defaultssite.reset# read data from directories/filessite.read# inspect posts collectionsite.collections['posts'].class# => Jekyll::Collectionsite.collections['posts'].docs.count# => 163example_doc = site.collections['posts'].docs[0]# => #<Jekyll::Document _posts/2004-09-09-bug-tracking.md collection=posts>example_doc.path# => "/Developer/redconfetti.github.io/_posts/2004-09-09-bug-tracking.md"example_doc.type# => :postsexample_doc.data# => {# "draft"=>false,# "categories"=>["php"],# "layout"=>"post",# "published"=>true,# "title"=>"PHP/MySQL Bug Tracking",# "author"=>"maxwell keyes",# "date"=>2004-09-09 16:51:00 -0400,# "comments"=>true,# "tags"=>["bug tracking"],# "slug"=>"bug-tracking",# "ext"=>".md",# "excerpt"=><Jekyll::Excerpt id=/2004/09/bug-tracking#excerpt># }example_doc.data["permalink"]# => nil# Content is the markdown stringexample_doc.content# => "For anyone who needs a free web based Bug Tracking system programmed# using\nPHP/MySQL, check out Flyspray.\n"example_doc.output# => nil# After file content is loaded into Jekyll::Site, it is rendered from Markdown# to actual HTML using site.rendersite.render# => nilexample_doc.output# => "<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n # <meta name=\"viewport\" content=\"width=device-width initial-scale=1\" />\n# <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n\n# <title>PHP/MySQL Bug Tracking</title># ...# <footer class=\"footer\">\n <span class=\"footer__copyright\"># &copy; 2023 Jason Miller. All rights reserved.</span>\n</footer>\n\n# </body>\n</html>\n"site.cleanupsite.writeCustom Post ApproachI tried to write a plugin/generator for Jekyll that used a class that inheritsfrom Jekyll::Document, and patches various methods so that it can be usedwithout sourcing data from a Markdown file under the _posts directory. 
I wasnot able to get this to work without errors and complications.Instead, it’s better that you create a custom page template, as suggested bythe Jekyll Generators documentation, with a generator that locates thecustom page by name, and simply adds the custom data under the pages datahash attribute.Liquid DropsThe only complication that could not be avoided is that theLiquid templating system used by Jekyll will raise errors if you usecustom defined objects in your page template(.e.g undefined method 'to_liquid').If the objects you are iterating over and injecting into your page are notone of the basic Ruby types, then you’ll need to make sure the objectsyou’re iterating over inherit from Liquid::Drop.See Liquid DropsWordpress RSS Feed ExampleHere’s an example of code needed for a simple Jekyll generator that canretrieve posts from an RSS/XML feed hosted under Wordpress.com.########################################### Gemfile# XML to Hash translator# https://github.com/savonrb/norigem 'nori', '~> 2.6'# Nokogiri# https://github.com/sparklemotion/nokogirigem 'nokogiri', '~> 1.15'# Backport Jekyll Sass Converter to avoid deprecation warnings gem 'jekyll-sass-converter', '~> 2.2'########################################### _config.ymlwp_posts_page: title: 'Blog' layout: 'wp_posts_page' feed_url: 'https://redconfetti.wordpress.com/feed/'########################################### _plugins/wordpress_posts.rbrequire "net/http"require "uri"require "nori"module WordpressPosts class Generator < Jekyll::Generator def generate(site) @post_page_config = site.config['wp_posts_page'] raise 'Missing Wordpress configuration in _config.yml' unless @post_page_config page_layout = @post_page_config['layout'] page_title = @post_page_config['title'] page_slug = page_title.strip .downcase .gsub(/[\s\.\/\\]/, '-') .gsub(/[^\w-]/, '') .gsub(/[-_]{2,}/, '-') .gsub(/^[-_]/, '') .gsub(/[-_]$/, '') feed_url = @post_page_config['feed_url'] post_feed = WordpressFeed.new(feed_url) # get template posts_page = site.pages.find { |page| page.name == 'wp_posts_page.html'} posts_page.data['post_feed'] = post_feed.items end endendclass WordpressFeed attr_accessor :rss_url def initialize(rss_url) self.rss_url = rss_url end def rss_channel @channel ||= begin rss = hash_data['rss'] || {} rss['channel'] end end def items @item ||= begin [rss_channel['item']].flatten.collect do |item| ItemDrop.new(item) end end end private def hash_data @hash_data ||= begin if !xml_string.blank? return Nori.new.parse(xml_string) end end end def xml_string @xml_string ||= begin uri = URI(rss_url) Net::HTTP.get(uri) end end class ItemDrop < Liquid::Drop attr_accessor :feed_item def initialize(feed_item) self.feed_item = feed_item || {} end def content feed_item['content:encoded'] end def title feed_item['title'] end endendPage template (_layouts/wp_posts_page.html)---layout: pagetitle: Blogpermalink: /blog/---<ul style="list-style-type: none; padding: 0; margin: 0;"> </ul>StoryblokThe above approach isn’t very ideal due to how Wordpress.com embeds oversizedimages into the posts. Also the free Wordpress hosting account limits thefeed to 350 posts.Hopefully the above example provides you with enough understanding to obtaincontent from any external data source and embed it into your custom Jekyllpages.If you want to go with a free CMS for your own site, consider Storyblok.The article, Add a headless CMS to Jekyll, gives good instructions onhow to use the Storyblok Ruby Gem with Jekyll. 
With Storyblok you candefine your own schema for the objects you’re embedding in your pages,and nest the objects inside of the content of other objects."
} ,
{
"title" : "RubyGems SSL Error with jRuby",
"category" : "",
"tags" : "jruby, rubygems, bundler, ssl",
"url" : "/2023/03/jruby-bundler-unrecognized-ssl-message/",
"date" : "2023-03-22 18:08:51 -0400",
"content" : "I spent several days investigating an error that was coming up with our Railsapplication build using jRuby v9.3.3 - v9.3.10. Everytime the build wouldtry to run bundle install we would get the following error.Fetching source index from https://rubygems.org/Retrying fetcher due to error (2/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <Unrecognized SSL message, plaintext connection? (https://rubygems.org/specs.4.8.gz)>Retrying fetcher due to error (3/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <Unrecognized SSL message, plaintext connection? (https://rubygems.org/specs.4.8.gz)>Retrying fetcher due to error (4/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <Unrecognized SSL message, plaintext connection? (https://rubygems.org/specs.4.8.gz)>Could not fetch specs from https://rubygems.org/ due to underlying error<Unrecognized SSL message, plaintext connection?(https://rubygems.org/specs.4.8.gz)>ERROR: bundle install failedYou will notice it attempts 3 times, this is because the script is actuallyrunning:bundle install --local --retry 3 || { echo "WARNING: bundle install --local failed, running bundle install"; bundle install --retry 3 || { echo "ERROR: bundle install failed"; exit 1; } }When we would run our script from the command line, it would work just fine.When we would try to run it from our Bamboo application (used for ContinuousIntegration), it would fail with the above errors.We’re using RVM to manage the different Ruby versions needed by our applicationson the Bamboo server. We thought it might be an issue with Bundler, but itturned out that one of our JAVA configurations was causing this issue.We noticed that JAVA_OPTS was not set when our admin ran the build scriptsfrom the command line, but with the Bamboo job it was set, and it includedthe -Djava.security.properties option pointing to a file that used thefollowing:jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \ EC keySize < 224, 3DES_EDE_CBC, anon, NULL, RSASSA-PSSWhen we added this to the export for JAVA_OPTS, the issue occurred when runningthe build scripts, or when running the RubyGems troubleshooting script(curl -sL https://git.io/vQhWq | ruby). When we removed it, the errorreturned."
} ,
{
"title" : "Long time nuclear waste warning messages",
"category" : "",
"tags" : "nuclear, waste, warning",
"url" : "/2021/12/long-time-nuclear-waste-warning-messages/",
"date" : "2021-12-09 15:37:18 -0500",
"content" : "I wanted to apply the robotic voice synthesis toLong-time nuclear waste warning messages.say -v fred "This place is a message"say -v fred "and part of a system of messages"say -v fred "pay attention to it\!"say -v fred "Sending this message was important to us."say -v fred "We considered ourselves to be a powerful culture."say -v fred "This place is not a place of honor."say -v fred "no highly esteemed deed is commemorated here."say -v fred "nothing valued is here."say -v fred "What is here was dangerous and repulsive to us."say -v fred "This message is a warning about danger."say -v fred "The danger is in a particular location."say -v fred "it increases towards a center."say -v fred "the center of danger is here."say -v fred "of a particular size and shape, and below us."say -v fred "The danger is still present, in your time, as it was in ours."say -v fred "The danger is to the body, and it can kill."say -v fred "The form of the danger is an emanation of energy."say -v fred "The danger is unleashed only if you substantially disturb this place physically."say -v fred "This place is best shunned and left uninhabited."See also Warning messages for future humans"
} ,
{
"title" : "Updating RBEnv on Raspberry Pi",
"category" : "",
"tags" : "raspberrypi, rbenv",
"url" : "/2021/06/updating-rbenv-raspberry-pi/",
"date" : "2021-06-14 10:19:55 -0400",
"content" : "Here’s a modified version of instructions provided by Yosei Ito. I usedapt remove instead of apt uninstall.sudo apt remove ruby-build$ mkdir -p "$(rbenv root)"/plugins$ git clone https://github.com/rbenv/ruby-build.git "$(rbenv root)"/plugins/ruby-buildThis worked very well for me, helping me to install Ruby 2.7.3, and installthe gem dependencies for this website without any errors or need for using‘sudo’ when installing gems via Bundler."
} ,
{
"title" : "Generating a New Rails Project",
"category" : "",
"tags" : "api, webpacker, editorconfig",
"url" : "/2020/12/generating-a-new-rails-project/",
"date" : "2020-12-10 10:38:55 -0500",
"content" : "The great thing about working with a framework like Ruby on Rails is how youcan pick and choose the components that make up your application. Developersoften have preferences concerning solutions or tools they use in their projects.They have the choice of testing tools such as Rspec or Cucumber, testingfactory libraries such as FactoryBot or Fabrication, authenticationsolutions such as Devise or AuthLogic, and authorization solutions such asCanCan or Pundit.It can feel heavy having to reconfigure a freshly generated Rails applicationto use these different components, because you spend most of your timeconfiguring the new application rather than jumping into actually buildingthe functionality.Application TemplatesThankfully Rails supports Application Templates that make configuring a newapplication a breeze.It’s possible for you to configure your own custom template with your primarypreferences all in one file. You could maintain your own template in aGithub Gist using this option.See my example template.App Template CommandEven better than this is that you can choose to apply certain configurationsto your application ala carte by using the ‘app:template’ rails command. Thiscommand allows you to run template fragments from publicly hosted templatescripts. You can find many scripts for various solutions on Rails Bytes.My RecipeI prefer to use tools provided by the JavaScript / ECMAScript eco-systemfacilitated by NodeJS packages for my front-end, using Webpack. Ruby on Railsapplications get support for this from Webpacker. I like ReactJS, but Iprefer VueJS for my own personal projects.Because I use Webpack as my asset pipeline for the front-end application andassets, I don’t need the Rails asset-pipeline or page rendering helpers. Thiscauses me to prefer using an API-only configuration with Ruby on Rails.To create a new application, I use the following command to generate anAPI Application, using Webpacker to with a VueJS front-end.rails new my_application --api --webpack=vue --database=sqlite3 --skip-test --skip-turbolinks --skip-sprocketsAfter I’m done generating the application, I use the following applicationtemplates to setup my app. EditorConfig - Specifies file formatting guidelines for many code editors Rspec - Popular unit testing framework for Ruby FactoryBot - Fixtures replacement for testing StandardRB - Ruby style guide, linter, and formatter. See StandardRB dotenv - A Ruby gem to load environment variables from .env.See dotenv-railsHere are some optional recommendations that depend on the type of applicationyou are developing: Friendly ID - “Swiss Army bulldozer” of slugging and permalink pluginsfor Ruby on Rails. See friendly_id Ahoy - Track visits and events in Ruby, JavaScript, and native apps.Data is stored in your database by default so you can easily combine it withother data. See Ahoy PgHero - Performance Dashboard for PostgreSQL. See PgHero Pundit - Authorization system. See Pundit Vue with InertiaJS - Library supporting conventional interface betweenfront-end tools (React, Vue.js, and Svelte) and back-end (Laravel and Rails).See InertiaJs SendGrid - Configures Ruby support for SendGrid email service API GraphQL - Add a GraphQL API to your Rails app DelayedJob - Background Job processing. See DelayedJob Sidekiq - Background Job processing. See Sidekiq"
} ,
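A condensed sketch of the workflow described above: generate the API-only app, then apply individual template fragments with the app:template command. The Rails Bytes URL is a placeholder; substitute the script you actually want to run.

```shell
# Generate an API-only Rails app with a Vue front-end via Webpacker
rails new my_application --api --webpack=vue --database=sqlite3 \
  --skip-test --skip-turbolinks --skip-sprockets

# Apply an individual template fragment afterwards (placeholder URL)
cd my_application
bin/rails app:template LOCATION="https://railsbytes.com/script/EXAMPLE"
```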
{
"title" : "How to Insert Special Entities in React",
"category" : "",
"tags" : "special entities, es6, react",
"url" : "/2020/12/html-special-entities-in-react/",
"date" : "2020-12-09 16:14:10 -0500",
"content" : "Generating CharactersYou can generate UTF-16 characters in ES6 using String.fromCharCode(),which takes one or more UTF-16 codes as input and returns the unicodecharacter(s) specified.// hex code inputlet copyright = String.fromCodePoint(0x00A9);// "©"// decimal code inputlet copyright = String.fromCodePoint(0169);// "©"Here is a list of the codes for common characters I’ve seen used in websites: Label Character Decimal Hex HTML Copyright © 169 0x000A9 &copy; Registered ® 174 0x000AE &reg; Trademark ™ 8482 0x02122 &trade; Euro € 8364 0x020AC &euro; Dollar $ 36 0x00024 &dollar; Ohm Ω 8486 0x02126 &ohm; Degree ° 176 0x000B0 &deg; Micro µ 181 0x000B5 &micro; Em Dash — 8212 0x201 &mdash; Non Breaking Space 160 0x000A0 &nbsp; If you need a symbol not listed above, see the HTML5 Character Reference,or the List of Unicode Characters.Using with React ComponentsAfter generating the character as a constant, just insert it into your JSX asneeded.import React from 'react';const copyrightMessage = () => { const copyright = String.fromCodePoint(0x00A9); const year = '2020'; const companyName = 'Company Name'; return ( <span> {`${copyright} ${year} ${companyName}`} </span> );};export default copyrightFooter;See also JSX Gotchas for more approaches."
} ,
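If you want to sanity-check a code point before dropping it into a component, a one-liner from the shell works; this assumes Node.js is installed locally.

```shell
# Print the copyright symbol and an em dash from their code points
node -e 'console.log(String.fromCodePoint(0x00A9), String.fromCodePoint(8212))'
# => © —
```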
{
"title" : "A New Solution for Personal and Small Business Websites",
"category" : "",
"tags" : "gatsby, headlesscms, render, github-pages",
"url" : "/2020/12/jamstack-websites/",
"date" : "2020-12-05 11:02:40 -0500",
"content" : "Static Website BeginningsI’ve maintained my own personal blog, and maintained websites for smallbusinesses, since 2000. I started building websites by manually coding the filesin HTML, and creating images I crafted in Photoshop. I’d update the website byuploading the files to the web hosting server using FTP. I later expanded onthis by learning CSS and JavaScript. For the longest time I usedMacromedia Dreamweaver (now owned by Adobe), because it was the closest thingto a web-standard compliant WYSIWYG editor, with a text/code editor thatsupported code coloring. In 2002 I multiplied the power I yielded indeveloping functional websites by learning how to use server-side PHP codingwith the MySQL database. This was my humble beginning as a full-stack developer.For myself this was fine, but for small businesses their relationship withtheir “webmaster” could easily become strained because being available to updatea clients website usually didn’t pay much. Many website maintainers might beunresponsive for days or weeks before updating your website. Even communicatingthe simple text changes you desired were difficult to convey via email to yourwebmaster.My Romance with WordpressI worked for a hosting company from 2004 - 2008 so that I could expand upon myability to create websites by understanding the web hosting servers that hostthe websites. I pursued learning Linux system administration, which includedlearning how to install and configure all related services manually from theLinux command line. The WHM/cPanel system, which made management of a serversimpler via a web-based management interface, had just grown in popularity in2004, and was expanding in use to power armies of website servers in datacenters around the world. The majority of websites hosted from these serverswere powered by PHP/MySQL.During this time many individuals and companies were adopting Content ManagementSystems (CMS) to power their website, because they provided an “administration”area that allowed them to edit the content on their webpages, or even createnew pages or blog posts, without needing to contact a website developer. Thesesystems were PHP/MySQL applications, so working with them was a no brainer.The one CMS that stuck out due to its ease of use was Wordpress. Even thoughWordpress was designed as a blog rather than a CMS, Wordpress can be configuredto support a website by configuring a custom page as the homepage, ratherthan the blog index page it defaults to. Additionally the blog functionality canbe hidden from the public website by removing the blog index page from thewebsites navigation menu altogether. This approach, combined with the hundredsof plugins and themes made for Wordpress made it a perfect platform for anypersonal or small business website. Really it could meet the needs of anywebsite short of complex custom web applications.The cost and hassle of designing and maintaining a website had beeneliminated. Someone is now able to setup and maintain a website themselveswithout coding skills necessary what so ever. Today you can sign-up forWordpress hosting with companies such as BlueHost or HostGator, purchase anddownload a beautiful premium design/theme from a site such as ThemeForest, andthen upload and activate that theme in your Wordpress dashboard. 
Setting up thepages and content for the website are all accessible to those with averagecomputer skills when using the Wordpress administrative Dashboard.I would tell everyone I knew that if they have a small business, Wordpress isthe best solution for their website, as opposed to limited template basedwebsites provided by platforms like Wix or SquareSpace. Wordpress has much moreflexibility and ability to expand in functionality as your business grows. Youcan use free themes/plugins to begin, upgrade to premium themes/plugins later,and if necessary you can even hire designers or developers to create completelycustom themes or plugins as needed.For a period of time I had my own cPanel web server running from a VirtualPrivate Server (VPS) hosted by Linode. I had created and hosted severalwebsites for small businesses and and friends hosted from this platform.The Pitfalls of WordpressI had tried to support these Wordpress sites by logging into their admin areaand updating the themes, plugins, and the Wordpress software itself when Icould. Everytime I did this I worried that I would unknowingly break thefunctionality somewhere on these websites. Eventually down the line, this isexactly what would happen.Updates to the Wordpress software, which are necessary to avoid security issueswith your website, often cause plugins and themes you are using to stop workingproperly. The developers of Wordpress plugins and themes bare no guarantee thatthey will provide updates so that your website continues to function properly.Premium themes or plugins may offer some additional support and updates in thefuture, but there is still no guarantee of this.I purchased an all-purpose theme builder framework called “Headway” for some ofthe sites I maintained. It was not able to support upgrades or migrations to thenewer major versions of the framework theme they had released, so many siteswere stuck with the version they were using. Unexpectedly, “Headway” had longercontinued to be supported and maintained as of 2016. This was very frustratingfor the websites I had invested so much time into designing using theirframework.Often enough I would bare the responsible for helping to fix a website that hadbecome defaced by hackers, or contain mysterious scripting used to run phishingscams. Sometimes hackers would try to send spam email from my server via ahacked website, thus causing other website users to have issues sending emailfrom my server, because my server had been added to a spam blacklist as a resultof the spam activity.Just a note: Hosting your email from a cPanel service is not worth the hassleeither. I recommend using an email service such as Google Workspace,Fastmail, or ProtonMail.The Promise of JAMstackSo what’s a better solution to setting up and maintaining a person or smallbusiness website? JAMstack!Right now the this approach is only available to companies with budgets largeenough to hire custom website developers to implement, but I forsee thisbecoming more acccessible to non-developers in the future.So what is this solution, and how is it better?The JAMstack recipe calls for using a static website generator to generate awebsite that is powered by modern development tools such as ReactJS andGraphQL, that sources your content from a Headless Content Management System(CMS). Static websites have no code running on the server side, so there is muchlower risk of your website being hacked. The only code is the JavaScript whichruns in the visitors web browser. 
Because your content is managed separatelyby a Headless CMS, your content can be published not only to your website butalso to other platforms like mobile applications.The full JAMstack solution is made possible by using multiple components thatare provided as separate or integrated services. These include: Headless CMS - Like Wordpress, this enables you to manage the text and imagesthat show up in your website in the form of pages, posts, or other types ofcontent. Custom Website Code - Powered by a static website generator like Gatsby;provides your site design, sources your updated content from the HeadlessCMS, to create your static website presentation. Git repository host - Most people use Github, Gitlab, or BitBucket tohost their website code in a Git repository. Build system - This system is triggered by updates to your website code,or by updates to your content in the Headless CMS. This system might be acontinuous integration / continuous delivery (CI/CD) system, or simplya script setup on your web server. When triggered it builds the updatedversion of your website and deploys it to the public website. Hosting - This can be a simple web server, a cluster of servers, or a contentdelivery network (CDN).Several Headless CMS services currently exist with free tier plans.See JAMstack - Top Headless Content Management SystemsSo all you need to do if you’re a small business with the resources to hire adeveloper is ask them to to integrate a website layout you choose from a premiumwebsite design template market such as ThemeForest into a JAMstack codebasepowered by frameworks such as Gatsby or one of the other site generators,and configure it to automatically regenerate and publish to your static hostingprovider each time you update or add any content to the Headless CMS.Netlify currently provides a CMS, build system, and CDN hosting with a freetier plan to help you get started, with paid plans once your business needsmore. Render also provides an integrated build service and hosting.Hopefully at some point in the future a flexible all-in-one solution will existthat enables a business to create a website as quick and easy as Wordpress, butpowered by a flexible JAMstack infrastructure that you can override with yourown custom front-end once your business needs custom development work.It turns out that designers are starting to publish Gatsby themes already.There may be some themes on ThemeForest that support Gatsby, such as Flexiblogwhich supports the Contentful or Sanity CMS. If you really want to find adesign that supports this approach, I’d recommend checking outJAMstack Themes."
} ,
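For anyone wanting to try the static-generator half of this stack, a rough sketch with Gatsby looks like the following; it assumes Node.js is installed, and the site name is just an example.

```shell
# Scaffold a Gatsby site and produce the static build
npm install -g gatsby-cli
gatsby new my-site
cd my-site
gatsby build   # static output lands in ./public, ready for a CDN or static host
```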
{
"title" : "VueJS - Built-In & Reserved Tags",
"category" : "",
"tags" : "vue",
"url" : "/2020/03/vuejs-reserved-tags/",
"date" : "2020-03-28 18:42:00 -0400",
"content" : "I’m working on an app that is meant to present a list of “steps” along a certain“path”. I tried to create a component called “path”, but this wasn’t allowed.I got this error in the console.vue.runtime.esm.js:638 [Vue warn]: Do not use built-in or reserved HTML elements as component id: pathIt turns out that HTML and SVG tags are reserved and cannot be used as VueJScomponent names. Built-In Tags slot component Reserved Tags HTML Tags html body base head link meta style title address article aside footer header h1 h2 h3 h4 h5 h6 hgroup nav section div dd dl dt figcaption figure picture hr img li main ol p pre ul a b abbr bdi bdo br cite code data dfn em i kbd mark q rp rt rtc ruby s samp small span strong sub sup time u var wbr area audio map track video embed object param source canvas script noscript del ins caption col colgroup table thead tbody td th tr button datalist fieldset form input label legend meter optgroup option output progress select textarea details dialog menu menuitem summary content element shadow template blockquote iframe tfoot SVG Tags svg animate circle clippath cursor defs desc ellipse filter font-face foreignObject g glyph image line marker mask missing-glyph path pattern polygon polyline rect switch symbol text textpath tspan use view "
} ,
{
"title" : "Updating Vue-Loader to v15 with Webpacker",
"category" : "",
"tags" : "webpacker, rails, vue, vue-loader",
"url" : "/2020/03/vue-loader-webpacker/",
"date" : "2020-03-01 14:37:00 -0500",
"content" : "I recently decided to jump into working with VueJS again within the contextof a project I’m working on that uses Rails 5 with Webpacker.I upgraded Webpacker from v3.6 to v4.2.2, Vue from v2.5.16 to v2.6.11,and Vue-Loader from v14.2.2 to v15.9.0.After making this update I got the following error:ERROR in ./app/javascript/my_pack/components/App.vueModule Error (from ./node_modules/vue-loader/lib/index.js):vue-loader was used without the corresponding plugin. Make sure to includeVueLoaderPlugin in your webpack config.Error: vue-loader was used without the corresponding plugin. Make sure toinclude VueLoaderPlugin in your webpack config. at Object.module.exports(/Users/jason/Projects/my_app/node_modules/vue-loader/lib/index.js:36:29)I was able to identify that I needed to make some configuration changes afterupdatiing Vue-Loader from v14 to v15.I’m a bit rusty on how to navigate the Node/JS eco-system, and the documentationon Webpack doesn’t apply to the Rails method of configuring the app withWebpacker.I recall that after I first setup this Rails app to use Vue with Webpacker thatI had to use the following rake task to configure it to use Vue:bundle exec rails webpacker:install:vueI wasn’t able to resolve this error until I applied these modifications toconfig/webpack/environment.js.I didn’t want to miss any other updates to the boilerplate configuration,nor do I want my own modifications overwritten. So what I recommend you doto upgrade is to make any commits in Git before re-running the above raketask again. Here’s what I got from the output:$ bundle exec rails webpacker:install:vueCopying vue loader to config/webpack/loaders conflict config/webpack/loaders/vue.jsOverwrite /Users/jason/Projects/seed/config/webpack/loaders/vue.js? (enter "h" for help) [Ynaqdhm] Y force config/webpack/loaders/vue.jsAdding vue loader plugin to config/webpack/environment.js insert config/webpack/environment.js insert config/webpack/environment.jsAdding vue loader to config/webpack/environment.js insert config/webpack/environment.js insert config/webpack/environment.jsUpdating webpack paths to include .vue file extensionFile unchanged! The supplied flag value not found! config/webpacker.ymlCopying the example entry file to /Users/jason/Projects/seed/app/javascript/packs create app/javascript/packs/hello_vue.jsCopying Vue app file to /Users/jason/Projects/seed/app/javascript/packs create app/javascript/app.vueInstalling all Vue dependencies run yarn add vue vue-loader vue-template-compiler from "."Expect that it will duplicate certain modifications it makes, but if you usegit diff on the modified files afterwards you can surely detect these and fixthem.$ git diff config/webpack/environment.jsdiff --git a/config/webpack/environment.js b/config/webpack/environment.jsindex 08d2baf..0f3a57d 100644--- a/config/webpack/environment.js+++ b/config/webpack/environment.js@@ -1,8 +1,12 @@ const { environment } = require("@rails/webpacker") const { VueLoaderPlugin } = require("vue-loader") const vue = require("./loaders/vue")+const { VueLoaderPlugin } = require("vue-loader")+const vue = require("./loaders/vue") environment.loaders.append("vue", vue) environment.plugins.prepend("VueLoaderPlugin", new VueLoaderPlugin())+environment.plugins.prepend("VueLoaderPlugin", new VueLoaderPlugin())+environment.loaders.prepend("vue", vue) module.exports = environmentI already manually added ‘.vue’ under ‘extensions’ in the config/webpacker.ymlfile. 
I deleted app/javascript/app.vue and app/javascript/packs/hello_vue.jsbecause I don’t need them. Now I’m no longer getting the error from./bin/webpack-dev-server."
} ,
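The safe upgrade loop described above can be summarized as: commit, re-run the Webpacker Vue installer, then diff for duplicated lines. A minimal sketch:

```shell
# Commit first so the installer's changes are easy to review and revert
git add -A && git commit -m "checkpoint before webpacker:install:vue"

# Re-run the Vue installer for Webpacker
bundle exec rails webpacker:install:vue

# Look for duplicated requires/appends it may have introduced
git diff config/webpack/environment.js config/webpack/loaders/vue.js
```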
{
"title" : "Fixing audio for Steam (Rust) on Mac OS X Mojave",
"category" : "",
"tags" : "rust, steam, mojave",
"url" : "/2019/11/rust-steam-mojave-audio/",
"date" : "2019-11-08 02:15:00 -0500",
"content" : "Recently I bought a new Macbook Pro running Mojave. I found myselfunable to get the microphone to work for the in-game voice chat usingthe ‘v’ key.The closest solution I could find was mentioned in this article: How to Fix Voice Chat in Mac OS MojaveThis article was oriented towards League of Legends, so I had to modify thecommands used to enable this for steam.Disable ProtectionYou still have to reboot the Mac while holding Command + R during start up.In the recovery mode you’ll have to use the menu to run the Terminal, andthen run csrutil disable.After this is completed, reboot the computer.Run Commandssudo sqlite3 ~/Library/Application\ Support/com.apple.TCC/TCC.db "INSERT or REPLACE INTO access VALUES('kTCCServiceMicrophone','com.valvesoftware.steam',0,1,1,NULL,NULL,NULL,'UNUSED',NULL,0,1551892126);"/usr/libexec/PlistBuddy -c "Add NSMicrophoneUsageDescription string" /Applications/Steam.app/Contents/Info.plist/usr/libexec/PlistBuddy -c "Set :NSMicrophoneUsageDescription Using voice chat" /Applications/Steam.app/Contents/Info.plistRe-enable ProtectionReboot into recovery mode again, open the Terminal and runcsrutil enable. Restart once again.After doing this, the Rust game was able to transmit my voice from the mic."
} ,
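One extra check that may help when following the steps above is confirming the System Integrity Protection state before and after the recovery-mode reboots:

```shell
# Reports whether SIP is currently enabled or disabled
csrutil status
# => System Integrity Protection status: enabled.
```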
{
"title" : "Fixing file and directory permissions recursively",
"category" : "",
"tags" : "cmdline, linux",
"url" : "/2019/06/recursive-file-and-directory-chmod/",
"date" : "2019-06-15 01:27:00 -0400",
"content" : "Often I find myself downloading ZIP files, and after unarchiving them all thefiles have totally incorrect permissions such as 777 for all files and folders.Go into the directory main directory at the top of the file/folder hierarchyand run the following commands to resolve this:find . -type d -exec chmod 0755 {} \;find . -type f -exec chmod 0644 {} \;"
} ,
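A small variation on the commands above: terminating find with + instead of \; batches many paths into each chmod invocation, which is noticeably faster on large trees.

```shell
find . -type d -exec chmod 0755 {} +
find . -type f -exec chmod 0644 {} +
```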
{
"title" : "Amazon Web Services",
"category" : "",
"tags" : "cloud9, aws",
"url" : "/2019/06/amazon-web-services/",
"date" : "2019-06-14 00:37:00 -0400",
"content" : "This article documents my exploration of Amazon Web Services (AWS).HistoryThe first service publically offered by Amazon in 2004 wasAmazing Simple Queue Service (SQS),a distributed message queueing service. Message queuing helps providecommunications between applications/services, with the concern or risk ofrequests (messages) being lost or overburdening the systems processing themessages.Thereafter AWS was relaunched offering Amazon Elastic Compute Cloud (EC2) andAmazon Simple Storage Service (S3).Elastic Compute Cloud (EC2)One of the most popular of the Amazon Web Services isAmazon Elastic Compute Cloud (EC2). I’m more familiar with dedicated orcolocated hosting, where an actual desktop or rack mount computer is setup foryou with some Linux variant installed so that you can SSH into it and configurethe server using your own chosen methods. I’m also familiar with using VirtualPrivate Servers hosted by companies like Linode or DigitalOcean.Much like VPS hosting, EC2 instances are provided by hypervisor technologyrunning on physical machines, thus hosting multiple Virtual Machines (VMs).EC2 uses the [Xen] hypervisor, whereas other VPS services may use theopen source Kernel-based Virtual Machine (KVM) module for Linux hosting,Hyper-V for Windows hosting, or VMware vSphere for cross-platformvirtualization.VPS ComparisonSo how does EC2 differ from typical VPS services?AutomationA VPS provider may have some sort of API to automate the setup and configurationof new virtual machines (VMs), but usually this has some limitations. EC2is intended to be provided as a service that allows for management of EC2instances through a manual interface or via web based APIs.Hardware AbstractionThe hardware and networking details for EC2 instances are completely abstractedfor you. They’re not necessarily confined to a single host machine like a VPSis.This can come with some challenges however when your needs require efficientcommunication between nodes. Amazon Virtual Private CloudResourcesA VPS typically has fixed resources such RAM, bandwidth, disk space, and CPU.EC2 allows you to modify these manually or via scripted automation.EC2 also provides flexibility with instance types that vary in theiroptimization of resources.There are general purpose instances that range in power appropriate for microservices or small web servers, to gaming servers or testing environments. Thereare instances intended for high performance computing. There are instancesintended for intense disk input/output (IO) such as Big Data processing,Database servers, or logging. There are instances intended for optimizedmemory usage such as Memcached, Redis, or database servers. There are instancesoptimized for enhanced networking. You can enable GPU acceleration to be usedwith Amazon Elastic Inference (EI) for machine learning.If you need high disk I/O capacity, which is typical for big data applications,then you can use a service such as Amazon Elastic Block Store (EBS) with yourEC2 instances.BillingYou can setup an EC2 instance for only a couple hours, days, or weeks, and onlybe charged for the resources you used. A typical VPS service is billed monthlyor yearly, so short term use isn’t economically viable.Dedicated HostsSome organizations require dedicated hosting for security compliance, or evenfor software licensing.Container ServicesAmazon Elastic Container Service (ECS) provides support to host applicationsthat have been configured to run in a Docker container. 
You can choose to run your application from a network of EC2 instances, or use Amazon Fargate to simplify the setup so that you can focus on developing your application. AWS also supports the open-source container orchestration system known as Kubernetes via the Amazon Elastic Container Service for Kubernetes (EKS). Anyone that has used Docker knows that you need a container registry to store your container images, so AWS also offers Amazon Elastic Container Registry (ECR). Cloud Formation You can automate the provisioning of your Cloud9 environment by using Amazon CloudFormation. See Automating AWS Cloud9. Cloud 9 [Amazon Cloud9] uses Amazon Virtual Private Cloud (VPC) to communicate with the EC2 instance. A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. Types of Storage Instance Store vs Elastic Block Store (EBS) Backup Methods How to Back Up Amazon EC2 Instances EC2 Backup Method 2: Creating a New AMI EC2 instances rely on Elastic Block Store (EBS) for their file system. You can configure Amazon to automatically create snapshots of your EBS file system periodically (daily, weekly, etc). Alternatively you can also create an Amazon Machine Image (AMI). Other Services Amazon Lambda Amazon Elastic Load Balancing Amazon Simple Storage Service (S3) Amazon Elastic File System (EFS) Amazon Elastic Container Service (ECS) - Container orchestration service that supports Docker containers Amazon Relational Database Service (RDS) - Provides database services such as Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database and SQL Server Dockerizing a Ruby on Rails Application"
} ,
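To make the automation point concrete, here is a hedged aws-cli sketch of launching and inspecting an instance; the AMI ID, key pair name, and region are placeholders, not values from the post.

```shell
# Launch a small instance (all identifiers below are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-keypair \
  --region us-west-2

# List the state of your instances
aws ec2 describe-instances --query 'Reservations[].Instances[].State.Name'
```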
{
"title" : "Jumpstart Guide to Ansible",
"category" : "",
"tags" : "ansible, cloud9, aws, software provisioning, configuration management, git",
"url" : "/2019/04/ansible/",
"date" : "2019-04-10 13:00:00 -0400",
"content" : "IntroductionThis guide will help you get started on using Ansible, an open-source toolyou can use to automate and maintain the software and configurations of yourLinux systems, as well as handle custom software deployments.Ansible is programmed in Python, however you do not need to know Python to useit. You may need to gain an understanding of YAML, Jinja templates, and/orread the Ansible Project Documentation.In this guide we use Ansible to maintain the state of your local developmentenvironment. After you are done you’ll be able to apply what you’ve learnedbeyond your local Linux environment.To follow this guide, it is recommended that you setup an Amazon Cloud9environment with the default Amazon Linux EC2 instance. The Cloud9 service isfree, as is the EC2 service to new Amazon AWS users for the first 12 months.If you’ve used Amazon AWS before, then your cost should be less than $10 permonth as long as you are using a t2.micro EC2 instance that is configured toturn off after 30 minutes of non-activity.If you do not have an environment setup yet, see theAmazon Cloud9 Environment Setup guide to get started.This guide also assumes that you have experince with Git. This guide isoriented around the “infrastructure as code” methodology, where all our desiredconfigurations are stored in a Git repository.Establishing a Git repositoryCreate a new Github repository to store your Ansible configuration. Afteryou’ve done this, use git to clone the repository into your local environment.Here’s the command I used with my own repository:git clone git@github.com:redconfetti/cloud9-dev.gitInstalling the Ansible EngineAmazon Linux includes Python 2.7 and 3.5 by default. Installing Ansible usingthe Yum package manager leads to conflicts, so use the Python 2.7 version ofPIP to install Ansible.# Install Ansible using Python 2.7/usr/bin/pip-2.7 install ansibleIf you’re using a different Linux distribution, see theAnsible Installation Guide for other system installation instructions.Ansible ConfigurationThe default Ansible configuration file is located under/etc/ansible/ansible.cfg. Ansible will over-ride these default settings ifthere are any found in a .ansible.cfg file located in your home directory, oran ansible.cfg file located in the current directory.Ansible is designed to be used from a specific machine, designated as the“management node”, that is used to orchestrate the setup and configuration ofother machines. This is why the default configurations exist globallyunder /etc/ansible.For our purposes we want Ansible to use the configuration we place in ourrepository, so create a new ansible.cfg file in the root folder of yourrepository with the following contents[defaults]inventory = hosts.ymlThis will help us to avoid needing to use the -i hosts.yml parameter withour Ansible commands, however it does introduce a possible security risk.Since we’re running an EC2 instance that we’re not sharing with other users,we’re not running the risk of another user modifying our ansible.cfg file.Ansible Configuration Documentation: Avoiding security risks with ansible.cfg in the current directory Configuring Ansible Ansible Configuration Settings ansible-configAnsible InventoryAnsible is able to manage the state of multiple machines. By default, theinventory is defined in /etc/ansible/hosts using a format similar to anINI file. 
Alternatively you can use a YAML file to define your inventory.As you saw above, we configured Ansible to recognize the hosts.yml filelocated in the root directory of the repository.Create the hosts.yml file with the following contents that defines the localmachine as ‘local’.all: hosts: local: ansible_host: localhost ansible_connection: localThis configures a host named ‘local’ that represents the local EC2 instancethat you’re working from when you use the command line interface (Terminal)in Cloud9.For more information, see Ansible - Working with InventoryAnsible Ad-Hoc CommandsAnsible is designed to connect to hosts configured in your inventory file usingSSH key authentication. Because we’re configured to connect to our localsystem, we do not have to worry about SSH connections or configuration.Ansible can be used to perform checks against hosts using a single commandline command, known as an ‘ad-hoc’ command.Run the command ansible all -m ping and you should see the following.$ ansible all -m pinglocal | SUCCESS => { "changed": false, "ping": "pong"}If you see the output above indicating success, then you’re ready to move ontothe next step.For more information, see Introduction To Ad-Hoc Commands.Ansible PlaybooksAd-hoc commands are useful, but playbooks are where we can define multipletasks that maintain the state of your system(s).Much like the hosts.yml file defined above, playbooks are defined in YAMLformat. YAML is not a programming language, but a format for storinginformation. In this case the information is our configuration that Ansiblemodules are defined to interpret and apply to the hosts we specify.A playbook can contain one or more “plays” that applies a certain state to agroup of hosts.The Amazon Linux distribution uses a package manager called Yum (YellowdogUpdater Modified). Ansible provides a Yum module which allows us to use thispackage manager to install and update software packages on the host.Create a file named local.yml with the following contents in your rootdirectory.---# This playbook deploys the entire setup to the Cloud9 development environment.- hosts: local tasks: - name: upgrade all packages become: yes yum: name: '*' state: latestNext, run the following command to run this playbook.ansible-playbook local.ymlYou should see output similar to this:PLAY [local] *******************************************************************TASK [Gathering Facts] *********************************************************ok: [local]TASK [upgrade all packages] ****************************************************ok: [local]PLAY RECAP *********************************************************************localhost : ok=2 changed=0 unreachable=0 failed=0Now all of the software packages installed within your EC2 instance runningAmazon Linux are updated.See Intro to Playbooks or Modules by Category.Ansible ModulesWe just covered a task that used the Yum module to update all the installedsoftware packages on the server.Ansible supports many modules that serve many different purposes. 
You can viewa list of all available modules by running:ansible-doc -lYou can read the documentation for a specific module by using the command:ansible-doc <module-name># View 'hostname' module documentationansible-doc hostnameYou can also browse modules online starting with Modules by Category.Ansible TemplatesMost Unix/Linux utilities and daemons can be configured using plain text files.Ansible supports the ability to define configuration file templates that use theJinja2 template syntax.Let’s create a file called gitconfig.j2 to define our Git clientconfiguration.# {{ ansible_managed }}[core] editor = /usr/bin/nano[user] name = {{git_client_name}} email = {{git_client_name}}Now update the local.yml playbook file so that it includes the ‘vars’ thatdefine our git client name and email address. Also add the tasks to output thevalue of the ansible_user_id variable, and configure the .gitconfig in thatusers home directory.---# This playbook deploys the entire setup to the Cloud9 development environment.- hosts: local vars: git_client_name: PeeWee Herman git_client_email: peewee@example.com tasks: - name: upgrade all packages yum: name: '*' state: latest - debug: var=ansible_user_id - name: configure git client template: src: gitconfig dest: "/home/{{ ansible_user_id }}/.gitconfig" owner: "{{ ansible_user_id }}" group: "{{ ansible_user_id }}" mode: 0644As you can see, the ‘debug’ task allows us to see what value is registeredfor the ansible_user_id variable. This is useful for troubleshootingyour task configurations when they are failing.$ ansible-playbook local.ymlPLAY [local] *******************************************************************TASK [Gathering Facts] *********************************************************ok: [local]TASK [upgrade all packages] ****************************************************ok: [local]TASK [debug] *******************************************************************ok: [local] => { "ansible_user_id": "ec2-user"}TASK [configure git client] ****************************************************changed: [local]PLAY RECAP *********************************************************************localhost : ok=4 changed=1 unreachable=0 failed=0Now check the state of the Git configuration file.$ cat ~/.gitconfig# Ansible managed[core] editor = /usr/bin/nano[user] name = PeeWee Herman email = peewee@example.comThe ‘Ansible managed’ comment at the top is simply a string that can beconfigured in the ansible.cfg to inform users that the configuration filebeing viewed is configured by Ansible.Ansible managed is the default string for this variable. 
You can redefinethis string to include the user-id of the Ansible user as well as the date andtime, although this will result in Ansible reporting that the conigurationfile has been changed everytime you run the task.See ansible_managedansible.cfg[defaults]inventory = hosts.ymlansible_managed = Ansible managed, do not edit directly: last update by {uid} on %Y-%m-%d, %H:%M:%SLet’s run the Playbook one more time.$ ansible-playbook local.ymlPLAY [local] *******************************************************************TASK [Gathering Facts] *********************************************************ok: [local]TASK [upgrade all packages] ****************************************************ok: [local]TASK [debug] *******************************************************************ok: [local] => { "ansible_user_id": "ec2-user"}TASK [configure git client] ****************************************************changed: [local]PLAY RECAP *********************************************************************localhost : ok=4 changed=1 unreachable=0 failed=0 As you can see, it reports that it changed the configuration file. If we checkthe configuration file, you’ll see that it now includes the message with user,date, and time.$ cat ~/.gitconfig# Ansible managed, do not edit directly: last update by ec2-user on 2019-03-25, 06:28:58[core] editor = /usr/bin/nano[user] name = PeeWee Herman email = peewee@example.comAnsible RolesIf you placed all your tasks in a playbook, it could get very large and wouldn’tbe as easy to manage. This is why Ansible supports grouping your automationcomponents (variables, templates, tasks, etc) into re-usable “roles”.It’s often that a role named ‘common’ is created to store settings that applyacross all your hosts, such as updating the system software using Yum,setting the timezone, etc. Other roles are defined separate of the ‘common’ rolethat apply to certain nodes of your cluster or network, such as ‘webserver’ or‘loadbalancer’. You can even get more detailed with roles named after thedaemons you want running, such as ‘apache’ or ‘nginx’. It’s all up to you.Taken from Ansible Docs - Roles Roles expect files to be in certain directory names. Roles must include atleast one of these directories, however it is perfectly fine to exclude anywhich are not being used. When in use, each directory must contain amain.yml file, which contains the relevant content: tasks - contains the main list of tasks to be executed by the role. handlers - contains handlers, which may be used by this role or even anywhere outside this role. files - contains files which can be deployed via this role. templates - contains templates which can be deployed via this role. vars - other variables for the role (see Ansible Docs - Using Variables for more information). defaults - default variables for the role (see Ansible Docs - Using Variables for more information). meta - defines some meta data for this role. See below for more details. Let’s move our tasks to a new role called ‘development’.To get started, establish a directory for your roles to contain thedevelopment role, containing a tasks and templates folder. 
Within taskscreate a new main.yml file, and move the Git configuration template into thetemplates folder.It should look like this when you’re done: roles development tasks main.yml templates gitconfig.j2 mkdir -p roles/development/tasksmkdir -p roles/development/templatestouch roles/development/tasks/main.ymlmv gitconfig.j2 roles/development/templatesNext move your tasks from the local.yml to roles/development/tasks/main.yml.Additionally, update the path for the gitconfig template so that it isreflected as templates/gitconfig (you don’t need to include the .j2 fileextension).---- name: upgrade all packages become: yes yum: name: '*' state: latest- debug: var=ansible_user_id- name: configure git client template: src: templates/gitconfig dest: "/home/{{ ansible_user_id }}/.gitconfig" owner: "{{ ansible_user_id }}" group: "{{ ansible_user_id }}" mode: 0644Now update local.yml so that it no longer defines the tasks for the play, butinstead points to the development role.---# This playbook deploys the entire setup to the Cloud9 development environment.- hosts: local vars: git_client_name: PeeWee Herman git_client_email: peewee@example.com roles: - developmentIf you run ansible-playbook local.yml once again, the output should be thesame as before with no errors.VariablesThe Ansible engine makes it possible to re-use a role that you have definedon different hosts with differing configurations. This is made possible throughthe use of variables. Variables can be defined in your hosts file per each host.Let’s move our Git name and email address from our local.ymlplaybook, and place it within hosts.yml so that it applies to our localhost.Here we’ve defined a ‘development’ group, and have placed our ‘local’ hostunderneath that group. We’ve also defined the variables that should apply toall hosts within that ‘development’ group.all: children: development: hosts: local: ansible_host: localhost ansible_connection: local vars: git_client_name: PeeWee Herman git_client_email: peewee@example.comIf you want to inspect your inventory for errors, you can use theansible-inventory command to inspect how Ansible is interpretting yourconfiguration.$ ansible-inventory --list{ "_meta": { "hostvars": { "local": { "ansible_connection": "local", "ansible_host": "localhost", "git_client_email": "peewee@example.com", "git_client_name": "PeeWee Herman", "repository": "https://github.com/redconfetti/redconfetti.github.io", } } }, "all": { "children": [ "development", "ungrouped" ] }, "development": { "hosts": [ "local" ] }, "ungrouped": {}}$ ansible-inventory --graph@all: |--@development: | |--local |--@ungrouped:The preferred way to store variables isn’t to define them in the hosts.ymlfile though. Instead you can create a host_vars directory, and then namethe files within it after each host. You can also create a group_varsdirectory, and then name the files within it after each group.Let’s create a host variable file for our ‘local’ host.mkdir host_varstouch host_vars/local.ymlNext let’s move our variables from the hosts.yml into ourhost_vars/local.yml file.host_vars/local.ymlgit_client_name: PeeWee Hermangit_client_email: peewee@example.comThis should result in our hosts file being slim again.hosts.ymlall: children: development: hosts: local: ansible_host: localhost ansible_connection: localFactsIn addition to the variables that you define, you can also use a feature ofAnsible known as “facts”. 
Facts are variables created by the Ansible engine thatcontain information about the host you are logging into… what user Ansible isacting as on the system, the command line environment, networking, etc.To get a list of all the variables Ansible collects about our system runthe command: ansible local -m setup.DefaultsRoles are intended to be re-usable, but it could be tedious having to definevariables for every single value that a task or template might need defined.This is why roles support defaults defined in defaults/main.yml.To make our role more flexible, let’s define defaults for the name and emailaddress used by the Git client.mkdir -p roles/development/defaultstouch roles/development/defaults/main.ymlAfter you’ve done this, add the following to defaults/main.yml.git_client_name: "{{ ansible_user_gecos }}"git_client_email: "{{ ansible_user_id }}@{{ ansible_fqdn }}"To properly test this, go into host_vars/local.yml and comment out thegit_client_name and git_client_email variables.# git_client_name: PeeWee Herman# git_client_email: peewee@example.comRun the playbook once more and see what happens.$ ansible-playbook local.yml$ cat ~/.gitconfig# Ansible managed, do not edit directly: last update by ec2-user on 2019-03-25, 19:42:28[core] editor = /usr/bin/nano[user] name = EC2 Default User email = ec2-user@ip-172-31-18-245.us-west-2.compute.internalAs you can see Ansible provided our defaults in the configuration file. Nowour role is more reusable.For more information, see Ansible Docs - Using Variables.Defining Variables in FilesIt’s worth mentioning that you can also define variables in external files.You more than likely have some passwords and API keys that you surely don’twant to check into your repository (even if it’s private).To work around this, you can configure your playbook like the following.---# This playbook deploys the entire setup to the Cloud9 development environment.- hosts: local roles: - development vars_files: - ~/.ansible_secrets.yml tasks: - debug: var=secret_password - debug: var=some_api_keyNext create a ~/.ansible_secrets.yml file with the following contents:---secret_password: abcdef123456some_api_key: A65B90148F091E5F3F7E4DFFECD1B074When you run your playbook, it should reflect the secrets you configured.$ ansible-playbook local.ymlPLAY [local] *******************************************************************TASK [Gathering Facts] *********************************************************ok: [local]TASK [debug] *******************************************************************ok: [local] => { "secret_password": "abcdef123456"}TASK [debug] *******************************************************************ok: [local] => { "some_api_key": "A65B90148F091E5F3F7E4DFFECD1B074"}PLAY RECAP *********************************************************************local : ok=3 changed=0 unreachable=0 failed=0Now you can define variables in a file that you want to keep secret, but notcheck them into your repository. 
At most you might want to check in a filenamed ansible_secrets.example.yml that contains bogus passwords and keys,and can be copied to ~/.ansible_secrets.yml and then modified if neededin the future.A good practice is to add instructions to your README.md file, like so:# SetupRun the following command to copy the template to your home directory```shellcp ansible_secrets.example.yml ~/.ansible_secrets.yml```Next, modify `~/.ansible_secrets.yml` to reflect the proper API keysand passwords.For more information, see Ansible Docs - Defining Variables In Files.Ansible VaultInstead of defining your variables in a file that you keep separate of yourrepository, there is an option to store your secrets in an encrypted filewithin your repository.Ansible Vault provides support for many commands related to working withencrypted variable files.You can run your playbook with a flag to let ansible-playbook know that itneeds to decrypt one of the variable files.ansible-playbook --ask-vault-pass local.ymlYou can also place your Ansible Vault password in a file and tell Ansible touse that password file.echo 'my$ecretpa$$' > ~/.vault_passansible-playbook --vault-password-file ~/.vault_pass local.ymlYou can avoid having to provide this flag altogether by defining the path toyour vault password file in an environment variable (defined in~/.bash_profile, or ~/.bashrc).export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_passOr if you want to make this a setting associated with your repository, you cansimply add a ‘[defaults]’ section to your ansible.cfg that defines the pathto the vault password file.[defaults]. . .vault_password_file = ~/.vault_passKeep in mind that recommend doing this only if you have a private repositorythat is not available to the public, but only to your development team. Thismakes it possible for all the secrets to be available to anyone on the team yougive the password to.Ansible GalaxyWouldn’t it be great if you could re-use roles that others have published?This is possible! You can install and use roles from a public repository knownas Ansible Galaxy. It’s also possible to obtain roles packaged on the web, orpublished in Git repositories (Github, Gitlab, or Bitbucket).To do this, create a requirements.yml file in the root of your Ansiblerepository to store the roles that your configuration requires.For our example we’ll use a role that sets the timezone# Install timezone role- src: yatesr.timezoneTo install the required roles, run ansible-galaxy install -r requirements.yml.$ ansible-galaxy install -r requirements.yml- downloading role 'timezone', owned by yatesr- downloading role from https://github.com/yatesr/ansible-timezone/archive/1.1.0.tar.gz- extracting yatesr.timezone to /home/ec2-user/.ansible/roles/yatesr.timezone- yatesr.timezone (1.1.0) was installed successfullyNow that you’ve installed the timezone role, we’ll need to configure thetimezone we want applied to our environment.Simply add the timezone setting to your host_vars/local.yml like below. 
We’reusing America/Los_Angeles to configure the system to use the Pacific DaylightTime (PDT) time zone.For other time zone values, see List of TZ Database Time Zones - List.Also feel free to uncomment your Git name and email address so that it applieslike it did before.git_client_name: PeeWee Hermangit_client_email: peewee@example.comtimezone: America/Los_AngelesAnd of course we have to add the role to our playbook.---# This playbook deploys the entire setup to the Cloud9 development environment.- hosts: local roles: - development - yatesr.timezone vars_files: - ~/.ansible_secrets.ymlLet’s run a command to output our date/time, so we can compare it to what wesee after reconfiguring our time zone.$ dateFri Mar 29 03:09:26 UTC 2019Now let’s run our playbook again.$ ansible-playbook local.ymlPLAY [local] *******************************************************************TASK [Gathering Facts] *********************************************************ok: [local]TASK [development : upgrade all packages] **************************************ok: [local]TASK [development : debug] *****************************************************ok: [local] => { "ansible_user_id": "ec2-user"}TASK [development : configure git client] **************************************ok: [local]TASK [yatesr.timezone : include_vars] ******************************************ok: [local] => (item=/home/ec2-user/.ansible/roles/yatesr.timezone/vars/../vars/RedHat.yml)TASK [yatesr.timezone : Install tzdata for Debian based distros] ***************skipping: [local]TASK [yatesr.timezone : Install tzdata for RedHat based distros] ***************ok: [local]TASK [yatesr.timezone : Install tzdata for Archlinux based distros] ************skipping: [local]TASK [yatesr.timezone : Set timezone config] ***********************************ok: [local]TASK [yatesr.timezone : Set link to localtime] *********************************ok: [local]PLAY RECAP *********************************************************************local : ok=8 changed=0 unreachable=0 failed=0 Now when we ask the system to output the date, we get the Pacific date/time.$ dateMon Mar 25 21:17:18 PDT 2019See Ansible Galaxy Docs for more information.ConclusionI hope this guide has been useful in jumpstarting your interest in usingAnsible. I surely tried to mention many of the things I needed to get startedand start using Ansible.I hope you’re more comfortable with Ansible and find yourself navigating theofficialdocumentation with more ease now that you understand what it does andhow to use it.Feel free to refer to my cloud9-dev repository to get other ideas on howyou can setup your local development environment in Cloud9 -redconfetti/cloud9-dev."
} ,
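As a quick recap of the guide, the day-to-day loop is a connectivity check followed by a playbook run. The heredoc below also sketches the gitconfig template used by the ‘configure git client’ task, with the name and email fields drawing from their respective variables; treat it as an illustration rather than a drop-in file.

```shell
# Verify Ansible can reach the inventory, then apply the playbook
ansible all -m ping
ansible-playbook local.yml

# Sketch of roles/development/templates/gitconfig.j2
cat > roles/development/templates/gitconfig.j2 <<'EOF'
# {{ ansible_managed }}
[core]
  editor = /usr/bin/nano
[user]
  name = {{ git_client_name }}
  email = {{ git_client_email }}
EOF
```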
{
"title" : "Sidekiq with Cloud66",
"category" : "",
"tags" : "hosting, cloud66, sidekiq",
"url" : "/2018/04/cloud66-sidekiq/",
"date" : "2018-04-24 23:08:00 -0400",
"content" : "I had to configure a Rails application using Sidekiqas the background job processor with Cloud66 recently.We’re currently only using Cloud66 with our staging server.The following configuration in a Procfile in the rootof our repository with the following configuration workedfine.worker: bundle exec sidekiq -e $RAILS_ENV -C config/sidekiq.yml -i {{UNIQUE_INT}}This was derived from the instructions in theHow to run Background processes guide.Cloud66 manages the worker processes cnofigured in this manner using theBluePill gem.You can log into your server using the CX toolbelt and use the followingcommands to inspect the status of the worker processes.# check status of running processessudo bluepill status# view log for workersudo bluepill log user_worker_1Worker configuration in the form of .pill files are located on the server within/etc/bluepill/autoload. Worker logs are stored in the logs folder for the application(e.g. /var/deploy/heroiq/web_head/current/log/user_worker_1.log)."
} ,
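Once you are on the server (for example via the cx toolbelt), the Bluepill checks from the post plus a plain tail of the worker log cover most debugging; the log path follows the pattern mentioned above.

```shell
# Inspect the Bluepill-managed Sidekiq worker
sudo bluepill status
sudo bluepill log user_worker_1

# Or follow the worker log directly
tail -f /var/deploy/heroiq/web_head/current/log/user_worker_1.log
```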
{
"title" : "SSH issues with Mac OS X High Sierra",
"category" : "",
"tags" : "high sierra, ssh",
"url" : "/2017/12/high-sierra-ssh-issue/",
"date" : "2017-12-12 13:08:00 -0500",
"content" : "A coworker of mine was reporting an issue with SSH after updating to Mac OS XHigh Sierra.$ ssh server-alias-hostnameUnable to negotiate with 192.168.1.5 port 22: no matching cipher found. Their offer: blowfish-cbc,aes256-cbcYou can view a list of supported ciphers by running ssh -Q cipher.It turns out that the system is configured to use certain ciphers within/etc/ssh/ssh_config. You can adjust your local configuration within~/.ssh/config to make sure that the ciphers supported by your local clientmatch one of the ones offered by the remote server.```SSH Config~/.ssh/configHost * SendEnv LANG LC_* Ciphers +aes256-cbc```"
} ,
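A quick way to apply the fix above: confirm your client actually supports one of the offered ciphers, then allow it in ~/.ssh/config (shown here for all hosts; scope it to a specific Host block if you prefer).

```shell
# List client-side ciphers and look for one the server offers
ssh -Q cipher | grep -E 'aes256-cbc|blowfish-cbc'

# Append the cipher setting to your client configuration
printf 'Host *\n  SendEnv LANG LC_*\n  Ciphers +aes256-cbc\n' >> ~/.ssh/config
```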
{
"title" : "Markdown Links and 80 Character Line Length",
"category" : "",
"tags" : "markdown",
"url" : "/2017/11/markdown-links-80-character-line-length/",
"date" : "2017-11-25 09:55:00 -0500",
"content" : "I’ve long been a fan of using Markdown for documentation in projectshosted on Github. In October of 2014 I decided to migrate from a Wordpressblog to Github Pages, which is powered by limited Jekyllfunctionality on the Github server side.With this migration I converted all my articles from HTML toGithub Flavored Markdown (GFM), which resulted in much better support forformatting my code examples, tables, strikethrough text formatting, andemojii.I’ve made use of the following cheat sheets for Markdown syntax: Github Markdown Cheatsheet Daring Fireball - Markdown: Syntax Adam-P Markdown CheatsheetAn issue I’ve had is trying to limit the line length of my markdown. Linebreaks do not break apart paragraphs when rendered, however you cannot add linebreaks into the URL of the link without breaking the link.Alternative SyntaxTypically link syntax is provided like so:[Google](http://www.google.com/)However if the link and link text exceeds 80 characters, then you end up withhard to read Markdown.The solution is to use a syntax not commonly mentioned in Markdown guides forlinking that separates the text content from the URL.The links may still wrap at the bottom of the document, but at least thecontent is easy to read in plain text, even on the command line.My Links:* [Github Flavored Markdown][1]* [Basic Writing and Formatting Syntax - Using Emoji][2][1]: https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown[2]: https://help.github.com/articles/basic-writing-and-formatting-syntax/#using-emojiOne problem I’ve run into is that the numbered references end up providedthroughout the page out of order, which bothers me in some obsessive-compulsivesort of way.Instead you can use the link text itself.Check out [Spotify] for cool music[Spotify]: https://www.spotify.com/Alternatively, you can also use a case insensitive text key if the link text istoo informal for you.[Click here for Google][Google Link][google link]: http://www.google.com/ImagesA similar alternative for images can also be used:![Beautiful flower photo][flower photo][flower photo]: /images/flower.png"
} ,
{
"title" : "Fitter Happier",
"category" : "",
"tags" : "radiohead, voice synthesis",
"url" : "/2017/08/fitter-happier/",
"date" : "2017-08-07 15:00:00 -0400",
"content" : "say -v fred "Fitter"say -v fred "happier"say -v fred "More productive"say -v fred "Comfortable"say -v fred "Not drinking too much"say -v fred "Regular exercise at the gym"say -v fred "three days a week"say -v fred "Getting on better with your associate employee contemporaries"say -v fred "At ease"say -v fred "Eating well"say -v fred "no more microwave dinners and saturated fats"say -v fred "A patient, better driver"say -v fred "A safer car"say -v fred "baby smiling in back seat"say -v fred "Sleeping well, no bad dreams"say -v fred "No paranoia"say -v fred "Careful to all animals, never washing spiders down the plughole"say -v fred "Keep in contact with old friends, enjoy a drink now and then"say -v fred "Will frequently check credit at moral bank, hole in wall"say -v fred "favours for favours"say -v fred "fond but not in love"say -v fred "Charity standing orders"say -v fred "on sundays ring-road supermarket"say -v fred "No killing moths or putting boiling water on the ants"say -v fred "Car wash, also on sundays"say -v fred "No longer afraid of the dark or midday shadows"say -v fred "nothing so ridiculously teenage and desperate"say -v fred "Nothing so childish"say -v fred "At a better pace"say -v fred "slower and more calculated"say -v fred "No chance of escape"say -v fred "Now self-employed"say -v fred "Concerned, but powerless"say -v fred "An empowered and informed member of society, pragmatism not idealism"say -v fred "Will not cry in public"say -v fred "Less chance of illness"say -v fred "Tires that grip in the wet, shot of baby strapped in backseat"say -v fred "A good memory"say -v fred "Still cries at a good film"say -v fred "Still kisses with saliva"say -v fred "No longer empty and frantic"say -v fred "Like a cat"say -v fred "Tied to a stick"say -v fred "That's driven into"say -v fred "Frozen winter shit, the ability to laugh at weakness"say -v fred "Calm"say -v fred "fitter"say -v fred "healthier"say -v fred "and more productive"say -v fred "A pig"say -v fred "in a cage, on antibiotics""
} ,
{
"title" : "FileMerge (also known as opendiff)",
"category" : "",
"tags" : "filemerge, opendiff, xcode",
"url" : "/2017/08/opendiff/",
"date" : "2017-08-07 15:00:00 -0400",
"content" : "Recently a developer colleague of mine was asking about diff tools. We let himknow that he can use Homebrew to install a ported versionof the ‘diff’ tool provided byGNU utils.An expensive alternative is Kaleidoscope app, which looks great but might beoverkill for our purposes.The good news is that XCode provides a GUI tool called FileMerge, which is alsoknown as opendiff from the command line. You do have to agree to the Xcode/iOSlicense, which requires local admin privileges, to use this tool.$ which opendiff/usr/bin/opendiffIt appears to provide a very intuitive GUI representation of the differences.You can also configure it to be the default merge tool with Git.git config --global merge.tool opendiffIf you want to launch FileMerge from the Launcher, you’ll need to open theApplications folder, right-click on XCode, choose to ‘Show Package Contents’,then navigate to Contents/Applications. Inside you’ll see the FileMergeapplication.If you right-click on FileMerge, you can right-click, choose to ‘Make Alias’,then move the alias to your Applications folder."
} ,
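In practice the two uses mentioned above look like this; the file names are placeholders.

```shell
# Compare two files directly in FileMerge
opendiff old_version.rb new_version.rb

# Use FileMerge as the Git merge tool
git config --global merge.tool opendiff
git mergetool   # opens FileMerge for each conflicted file
```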
{
"title" : "Running a Bitcoin Core Full Node",
"category" : "",
"tags" : "ubuntu, bitcoin",
"url" : "/2017/05/running-a-bitcoin-core-full-node/",
"date" : "2017-05-26 00:08:00 -0400",
"content" : "There has been a lot of hype concerning crypto currencies like Bitcoin andEthereum recently. I even had some of my own minor gains through an account Ihave with Coinbase.com.I haven’t been much into the zeitgeist of Bitcoin investment, or even thepossibilities of blockchain methods used for real-world applications other thancurrency, until now.You will need a server that has at least 150 GB available, and as the size ofthe blockchain increases this will rise. I configured my node to use theauto-pruning feature, but it still is using 128 GB currently.$ du -sh .bitcoin/128G .bitcoin/Install the software-properties-common Packagesudo apt-get install software-properties-commonAdd the Bitcoin Personal Package Archive (PPA)sudo apt-add-repository ppa:bitcoin/bitcoin Stable Channel of bitcoin-qt and bitcoind for Ubuntu, and their dependenciesNote that you should prefer to use the official binaries, where possible, tolimit trust in Launchpad/the PPA owner.No longer supports precise, due to its ancient gcc and Boost versions. More info: https://launchpad.net/~bitcoin/+archive/ubuntu/bitcoinPress [ENTER] to continue or ctrl-c to cancel adding itIgnore the message about ‘precise’ no longer being supported, and press ENTERto continue. It’s referring to Ubuntu 12.04.5 LTS (Precise Pangolin). You cancheck your own version of Ubuntu by running cat /etc/lsb-release.After this completes you should update all the packages.sudo apt-get updateInstall Bitcoin Core daemon (bitcoind)sudo apt-get install bitcoindConfiguring Bitcoin DaemonA problem that you will likely run into is where the daemon uses up all the diskspace on your server. I recommend creating a bitcoin.conf configuration file.It’s best to set the minimum value for pruning, and also set the db cache sizeto be an appropriate amount of RAM in measured in megabytes.# Enable pruning to reduce storage requirements by deleting old blocks.# This mode is incompatible with -txindex and -rescan.# 0 = default (no pruning).# 1 = allows manual pruning via RPC.# >=550 = target to stay under in MiB.prune=550# Set database cache size in megabytes (4 to 16384, default: 300)dbcache=1000Run Bitcoin Core DaemonExit to an unprivileged user account, and then run bitcoind -daemonbitcoind -daemon -conf=bitcoin.confBitcoin server startingAfter this point you can run various commands to interact with the daemon.# Get Network Infobitcoin-cli getnetworkinfo# Get Blockchain Infobitcoin-cli getblockchaininfo# Get Wallet Infobitcoin-cli getwalletinfo# Stop the Nodebitcoin-cli stopStarting Bitcoin DaemonIt’s cumbersome to type the command above each time you need to re-start thedaemon. By default Ubuntu configures your $PATH so that it includes ~/bin,should that directory exist. Do the following to prepare some scripts for easyexecution.mkdir ~/bintouch ~/bin/btcstarttouch ~/bin/btclogecho -e '#!/usr/bin/env bash\nbitcoind -daemon -conf=~/bitcoin.conf' > ~/bin/btcstartecho -e '#!/usr/bin/env bash\ntail -F ~/.bitcoin/debug.log' > ~/bin/btclogYou’ll have to logout and log in again, but now you will be able to start yourdaemon using btcstart command.$ btcstartBitcoin server starting$ btclog2017-05-26 15:59:19 Opened LevelDB successfully2017-05-26 15:59:19 Using obfuscation key for /home/johnsmith/.bitcoin/blocks/index: 0000000000000000......You’ll have to use CTRL+C to exit out of the logs."
} ,
{
"title" : "Configuring a New Ubuntu Server with Sudo",
"category" : "",
"tags" : "ubuntu, sudo, sshd, security",
"url" : "/2017/05/configuring-new-ubuntu-server-with-sudo/",
"date" : "2017-05-26 00:08:00 -0400",
"content" : "Here are my notes for configuring a new Ubuntu server with a single user withsudo rights, with the ‘root’ user login disabled in the SSHd configuration.This guide assumes that you have just created a server from the web interface ofa service like Linode or Digital Ocean, and you know the root password.Local ConfigurationFrom your local machine, you can configure your SSH client within~/.ssh/config. Use the following configuration to connect to the server usinga specific username and SSH key.```SSH ConfigHost myserver Hostname 192.168.1.2 Port 22 User johnsmith IdentityFile ~/.ssh/id_rsaThis configuration makes it possible to connect to the server quickly using`ssh myserver`.## Create User AccountBecause the account doesn't exist on the server yet, you'll need to login asroot with the root password or with the same type of SSH key configuration.Once you're in the server as 'root', create the user account using:``` shelladduser johnsmithAdd User to Sudo GroupUbuntu has a ‘sudo’ group already setup. Simply use usermod to add your newuser account to that group.usermod -aG sudo johnsmithConfigure SSH KeyYou’ll need to login to the new account using the su command.su - johnsmithNext create a ~/.ssh folder with authorized_keys file, and place your publicSSH key (likely located within ~/.ssh/id_rsa.pub on your local machine) withinthis authorized_keys file.mkdir -p ~/.ssh/touch ~/.ssh/authorized_keysnano ~/.ssh/authorized_keysAfter you’ve configured your new account for SSH key authentication, use exitto get back to ‘root’.Configure SSHd to Disallow Root User LoginWithin /etc/ssh/sshd_config, find the PermitRootLogin setting and set itfrom ‘yes’ to ‘no’. Do the same for the ‘PasswordAuthentication’ setting.```ssh configPermitRootLogin noPasswordAuthentication noAlso add an entry that ensures that only your sudo user can login, for goodmeasure. This keeps any other accounts that might exist from being used.```shellAllowUsers johnsmithFor good measure you could also change the port used by SSHd. This would requirethat you set a different port number in your SSH config file (~/.ssh/config)with the port specified.Port 6221After you’ve saved the changes to the configuration file, restart the SSHd server./etc/init.d/ssh restartNow logout and hope that you can login as your user. Perhaps you should staylogged in and open a new terminal tab and test it out before you logout as‘root’.Becoming RootNow you can log into your box as your normal user account, then become rootusing:sudo su rootYou’ll just have to type in your own password once. This means that hackers willneed to know your username, SSH private key, and password, to gain full accessto your box."
} ,
{
"title" : "Detecting if WebMock is enabled for Net::HTTP",
"category" : "",
"tags" : "WebMock, HTTParty",
"url" : "/2017/05/detecting-webmock-enabled-net-http/",
"date" : "2017-05-17 17:43:52 -0400",
"content" : "I ran into an issue where we were mocking HTTP responses 400+ in our Rspectests, which resulted in our application logging an error and a stack trace.When we expect errors because we’re using WebMock to emulate an HTTP 500response, logging the stack trace involved can be too verbose.Sometimes we might need the stack trace, such as when a developer is debuggingcode involving the handling of error responses. I discussed this with otherdevelopers they expressed that they don’t want to introduce a globalconfiguration flag to turn the stack trace logging on or off.The ideal solution was to simply not log the stack trace when WebMock is beingused in the ‘test’ environment.We’re using HTTParty, which uses Net::HTTP. I did some investigating anddiscovered that when WebMock is enabled (via WebMock.enable!), that itreplaces the HTTP module with it’s own. There isn’t anything clear to indicateif WebMock is enabled or not, however I noticed thatNet::HTTP.socket_type is redefined as StubSocket when WebMock is enabled.> Net::HTTP.socket_type=> Net::BufferedIO> WebMock.enable!=> {:net_http=>WebMock::HttpLibAdapters::NetHttpAdapter}> Net::HTTP.socket_type=> StubSocket> WebMock.disable!> Net::HTTP.socket_type=> Net::BufferedIOI don’t see the equivalent modification in other adapters, such as Curb.I’m going to open a Github issue requesting support such as this. Perhaps asimple unified call to set a class variable in WebMock class itself, with acorresponding WebMock.is_enabled? method. I’ve made an issue to requestthis. I will make a pull request soon."
} ,
{
"title" : "Static Hosting with Neocities",
"category" : "",
"tags" : "static, neocities, jekyll",
"url" : "/2017/05/static-hosting-with-neocities/",
"date" : "2017-05-12 12:43:51 -0400",
"content" : "I’ve been using a Wordpress site for my blog for years, but that has becomecumbersome, especially when you have to deal with your website being exploiteddue to holes in one of the many plugins that your site is relying on.I used to focus on LAMP stack development, and so running my owncPanel/WHM server was a no brainer. I more recently migrated my tech blog,Ruby Colored Glasses, from Wordpress to Github Pages. This is nice because thesite is hosted for free by Github, however that’s limited to one site per eachaccount.So I’ve decided to try to find another cheap low-cost static website solutionthat works with Jekyll.NeocitiesBack in the 90’s there used to be a free hosting solution known as GeoCitiesthat hosted many awesome websites for many people. This is where many peoplewere able to express themselves in their own unique ways, while alsolearning HTML.Neocities hopes to provide the same type of community. For $5 a month you canbecome a Supporter, which earns you theability to create as many sites as you wish, use a custom domain with each site,and also use WebDAV to manage files. If you want the same for a lifetime, youcan send them $100 via Bitcoin.So my goal at the current moment is to explore if it’s possible to generate awebsite via Jekyll, and then upload it to one of the Neocities sites.ImportOne of the first steps for me is to import my Wordpress site. There is aplugin that one can use to import all of their Wordpress content into a Jekyllsite, however it requires that you have direct MySQL access to your server.Note: The importer only converts your posts and creates YAML front-matter. Itdoes not import any layouts, styling, or external files (images, CSS, etc.).Configure Server for Public AccessI had to go into my server and configure MySQLd to bind to more than just thelocal host address. This required that I edit/etc/mysql/mysql.conf.d/mysqld.cnf on the Ubuntu machine I’m currently hostingthe site from and change the IP to 0.0.0.0.# Instead of skip-networking the default is now to listen only on# localhost which is more compatible and is not less secure.# bind-address = 127.0.0.1bind-address = 0.0.0.0I was then able to connect directly to the server and authenticate. Luckily Ididn’t have any sort of firewall blocking the ports. I tested the connectionusing this command:mysql --host=123.321.123.5 --user=my_user --password=mySecr3tPaSS my_database_nameInstall Gemsgem install jekyll-import unidecode sequel mysql2 htmlentitiesPerform ImportI’m choosing to import all the files to md (Markdown) file extensions insteadof ‘html’. This command worked just fine for me.ruby -rubygems -e 'require "jekyll-import"; JekyllImport::Importers::WordPress.run({ "dbname" => "my_db_name", "user" => "my_db_username", "password" => "my_secret_password", "host" => "192.123.193.12", "socket" => "", "table_prefix" => "wp_", "site_prefix" => "", "clean_entities" => true, "comments" => true, "categories" => true, "tags" => true, "more_excerpt" => true, "more_anchor" => true, "extension" => "md", "status" => ["publish"] })'This resulted in all my pages and posted imported into the repository, and readyfor a long cleanup.ResourcesHere are some various links I’ve explored in finding a low-cost Jekyll basedhosting solution. DesignRope - Static Web Hosting: Who’s Best? 
Aerobatic Neocities Neocitizen - Used to public sitefrom a folder Neocities API Jekyll Docs - Wordpress Import Jekyll Docs - Deployment Methods James Ward - Jekyll on Heroku Netlify - Static hosting with a free entry tier Hugo - an alternative to Jekyll I assume Smashing Magazine - Static Website Generators Reviewed"
} ,
{
"title" : "Intro to Tmux",
"category" : "",
"tags" : "tmux, screen",
"url" : "/2016/06/tmux-intro/",
"date" : "2016-06-30 13:05:00 -0400",
"content" : "Recently I learned a few of the basic commands needed to usethe GNU screen command to keep a command line session running even afterI’ve disconnected from a remote VPS. I learned this specifically so that I couldkeep irssi running and logged into a specific IRC channel, so I couldreturn to the sessionand view the history of messages that I had missed.Recently I heard about Tmux as an alternative solution, and also discoveredthat it can also be used to maintain separate virtual terminals (windows), aswell as split the screen into separate “panes”. Splitting the screen into panescan also be done with GNU screen, but it’s not as well supported. Seereasons to use tmux instead of screen.InstallationInstalling for MacUse Homebrew to install tmux on a Mac OS X machine.brew install tmuxInstalling for Linuxsudo apt-get install tmuxFirst UseAfter you run tmux for the first time, you’ll notice that you are returned toa typical shell prompt, however there is now a green bar at the bottom of yourscreen.You are now operating within a tmux session. Within a session you can establishmultiple windows, with each window supporting multiple panes that display withinthe window.Sessions are like different work spaces. You can detach from a work space andthen drop into another session that you setup previously.PanesMany commands supported by tmux involve using a keystroke known as the ‘prefix’.By default this keystroke is CTRL + B, however you can configure tmux to usea different keystroke as the prefix.Pressing CTRL + B, followed by % will split the screen into two panesvertically. You can press CTRL + B (PREFIX), then one of the arrow keys toswitch between the panes. For instance PREFIX, then LEFT ARROW key will moveyour cursor to the left pane.Next if you type PREFIX then " (PREFIX - "), it will split the pane into twopanes horizontally. You can use PREFIX then UP ARROW or DOWN ARROW to switchbetween the horizontal panes.When you are inside of a specific pane, you can hold down the PREFIX keystrokeand tap one of the arrow keys multiple times to resize the pane.If you want to make a pane temporarily full screen, you can use PREFIX - z totoggle between full screen and original size.To close a pane you can simply use the exit command from the shell.WindowsUse PREFIX - c to open up a new window. You’ll now see that the status barat the bottom of the screen reflects the new windows existence. You can switchbetween the windows using PREFIX followed by the number of the window (e.g.PREFIX - 0 and PREFIX - 1).Notice that there is an asterisk (*) next to the window that you are currentlyviewing in the status bar, and a dash (-) next to the last window you wereviewing. PREFIX - l will allow you to switch between the current and lastwindow.SessionsA session will run in tmux until you end it. You can detach from a session byusing PREFIX - d. You can list sessions using tmux list-sessions ortmux ls.You can re-attach to the tmux session by using tmux attach. If you havemultiple sessions runing, you can use tmux attach -t 0, where -tmeans “target”, and 0 is the number for the session. You can also killsessions using a similar command: tmux kill-session -t 0.Alternatively you can name sessions when you start them usingtmux new -s session_name, and then use tmux attach -t session_name toreconnect to the session.Further Reading tmux cheatsheet Thoughtbot - A tmux Crash Course The Pragmatic Programmer - tmux"
} ,
{
"title" : "Unbricking TP-Link TL-WDR4300",
"category" : "",
"tags" : "wifi, tp-link, wdr4300, openwrt, mesh",
"url" : "/2016/04/tp-link-wdr4300-recovery/",
"date" : "2016-04-14 00:00:00 -0400",
"content" : "I tried to flash the TP-Link TL-WDR4300 router with a custom OpenWRT imagerecently, and after doing so I was unable to connect to the device like Iexpected.Here is how you can recover / un-brick the device.Install tftpdI’m using an Ubuntu machine. There are instructions for installing tftpd andxinetd, but these don’t seem to work with Ubuntu 14.04. If you’re using Windows,then I recommend this video.I found instructions on a Spiceworks.com forumthat worked for me.Install tftpd-hpa and tftp clientsudo apt-get install tftpd-hpa tftpEdit the TFTPd Configuration FileEdit /etc/default/tftpd-hpa to use the following configurationsudo vi /etc/default/tftpd-hpaTFTP_USERNAME="tftp"TFTP_DIRECTORY="/tftpboot"TFTP_ADDRESS="0.0.0.0:69"TFTP_OPTIONS="-s -c -l -vv"The -vv will cause the TFTP server to log out to /var/log/syslogCreate the TFTP DirectoryThe server will serve files from the /tftpboot directory. You need to createit and give it the proper permissions.sudo mkdir /tftpbootsudo chmod -R 777 /tftpbootsudo chown -R nobody /tftpbootRestart the servicesudo service tftpd-hpa restartTest the TFTP DaemonCreate a file in the directoryecho "this is a test" > /tftpboot/test.txtDownload the file via tftp to your home directory. If it downloads fine, yourtftp server is running.$ cd ~$ tftp localhosttftp> get test.txtReceived 16 bytes in 0.1 secondstftp> quit$ cat test.txtthis is a testDownload the Latest FirmwareDownload the firmware from the tp-link official support page to the /tmpdirectory and unzip the file.cd /tmpwget http://www.tp-link.us/res/down/soft/TL-WDR4300_V1_151104_US.zipunzip TL-WDR4300_V1_151104_US.zipCopy the binary image file to /tftpboot named as ``.cp wdr4300v1_en_us_3_14_3_up_boot\(151104\).bin /tftpboot/wdr4300v1_tp_recovery.binConfigure your MachineChange your machines IP address to 192.168.0.66. The router will try toconnect to this address and download the image from it in a future step. Thisrequires a subnet setting of 255.255.255.0, with a gateway of 0.0.0.0.Enable TFTP Recovery ModeTaken from OpenWRT - TP-Link TL-WDR4300 - Flashing via TFTP:Power up the TL-WDR4300 and as soon as the asterisk/star symbol to the right ofthe power/IO image starts to flash, hold down the WPS/Reset button that is onthe back of the device for about 10 seconds. The asterisk symbol should beginto flash much faster than before. Let the device continue to fit and do thisuntil it reboots.Wait for the firmware transfer (about 20s), firmware flash (about 90s) andsubsequent reboot (about 30s).You can tail the syslog to see if the router is actually interfacing with theTFTPd server.tail -F /var/log/syslogReferences DD-WRT Forum - How to unbrick TP-Link N750 WDR-4300 DD-WRT - TP-Link TL-WRT4300"
} ,
{
"title" : "Getting Started with IRSSI",
"category" : "",
"tags" : "irc",
"url" : "/2016/04/getting-started-with-irssi/",
"date" : "2016-04-09 00:00:00 -0400",
"content" : "Often open source projects or organizations use an IRC channel on FreeNode toprovide support to users and/or developers. I’m trying to retain familiaritywith the command line, rather than become completely dependent on GUIapplications, so I’ve decided to use IRSSI instead ofPidgin or Adium (Mac OS X).InstallationInstalling IRSSI is easy on Ubuntu, simply use:apt-get install irssiYou can also install it using Homebrew on Mac OSX:brew install irssiAfter it’s done installing, simply run the programirssiWindowsIRSSI support separate “windows” for the different channels you are connectedto, or for the different people you are chatting with. If for some reason youdo not see information on the screen for a command you’ve run, it may bedisplayed in another window.By default you can use the ALT key combined with a number key (e.g. ALT+1,ALT+2, ALT+3, etc) to switch between the different displays in IRSSI.If your terminal program doesn’t support this, or uses these key combinationsto switch between it’s own tabs (like Ubuntu terminal does), then you shoulduse the /window command instead./WINDOW NEW - Create new split window/WINDOW NEW HIDE - Create new hidden window/WINDOW CLOSE - Close split or hidden window/WINDOW HIDE [<number>|<name>] - Make the split window hidden window/WINDOW SHOW <number>|<name> - Make the hidden window a split window/WINDOW SHRINK [<lines>] - Shrink the split window/WINDOW GROW [<lines>] - Grow the split window/WINDOW BALANCE - Balance the sizes of all split windowsCommandsFor those that are new to IRC, it’s a good idea to become familiar withthe common commandsthat IRC programs support. Here are a few: /join #channel_name - Joins a channel /names - Lists names of users in the current channel /who <nickname> - Get info on user (shown in window 1) /me <action> - Announces some action such as /me waves hello /msg <nickname> <message> - Send a direct message to another user /ignore <nickname> - Block someone that is harassing you /quit or /exit - Quits IRC programThere are other commands that are supported only by IRSSI that can be found inthe IRSSI DocumentationIt’s recommended that you use the /help command to get the list of othersupported commands. You can get more information on each command by followingthe /help command with the name of the command. For instance you can get moreinformation on the /connect command by using /help connect.Getting StartedUpon opening the program for the first time IRSSI will connect to a defaultIRC network. There is a configuration file in ~/.irssi/config that you caninspect, but you can use commands from within the program to configure IRSSI toautomatically perform when you first open the program. This includes connectingto Freenode, authenticating using your registered nick name, and joining adefault channel.You can use these commands to get started immediately:Set your nick name and real name/set nick <nick>/set real_name <Real Name>Connecting to FreeNode/connect irc.freenode.net 8001Join Channel/join #ubuntuRegistering with FreeNodeThe FreeNode IRC network allows you to register your nickname and associateit with your email address. 
This is done by using the following commands:/msg nickserv REGISTER <password> <email>You should receive a message informing you that you need to check your emailaccount and obtain instructions to verify yourself.To make sure that your email address isn’t revealed to other users, use thefollowing command to ensure that it is hidden./msg NickServ SET HIDEMAIL ONYou can verify your information with the NickServ by using:/msg nickserv infoAutomatic ConfigurationConfigure IRSSI with Freenode network, then register the Freenode server youwill connect to via an SSL connection, then configure the automatically joinedchannel (#ubuntu in this example)./network add Freenode/server add -auto -ssl -ssl_verify -ssl_capath /etc/ssl/certs -network Freenode irc.freenode.net 7000/channel add -auto #ubuntu Freenode/saveAfter you’ve successfully registered your FreeNode nick name, you can run thiscommand to configure IRSSI to login automatically after connecting to FreeNode./network add -autosendcmd "/msg nickserv identify <password> ;wait 2000" Freenode"
} ,
{
"title" : "Recommended Gems",
"category" : "",
"tags" : "gems, ruby",
"url" : "/2016/03/recommended-gems/",
"date" : "2016-03-05 00:00:00 -0500",
"content" : "Here are some Gems we recommend that you checkout. Authentication / Authorization devise - Flexible authenticationsolution for Rails with Warden cancan - authorization library for Railswhich restricts what resources a given user is allowed to access pundit - Minimal authorization through OOdesign and pure Ruby classes HTTP Clients curb - Ruby bindings for libcurl httparty - Fun HTTP client for Ruby typhoeus - Multithreaded libraryfor accessing web services in Ruby Mac OSX lunchy - A friendly wrapper forlaunchctl, used to manage services. Great for managing daemons installed viaHomebrew, such as Postgres, Memcached, Redis, etc. Logging itslog - log formatter with colorsupport Rails Console pry - IRB alternative and runtime developerconsole pry-rails - Rails initializer forRails pry-remote - Connect to pryconsole remotely pry-nav - Binding navigation commandsfor Pry to make a simple debugger rbenv - Simple Ruby versionmanagement. Less intrusive than RVM. ORM audited - ORM extension thatlogs all changes to your Rails models Form Helpers simpleform - Alternative formhelpers, tied to a simple DSL, with no opinion on markup, with support forTwitter Bootstrap Configuration figaro - Facilitates storingsensitive configuration information for your app in a file not checked intothe repository, such as AWS keys, passwords, etc. Testing / Continuous Integration rspec - Alternative to Test::Unit. Easier to read.More specific test support (doesn’t couple view/controller testing) webmock - Library for stubbing andsetting expectations on HTTP requests in Ruby simplecov - SimpleCov is aCode coverage analysis toolfor Ruby 1.9+. fabrication - alternative to FactoryBotfor fixtures replacement. Github database_cleaner - Strategiesfor cleaning databases in Ruby. Can be used to ensure a clean state fortesting heckle - mutation tester thatmodifies your code and runs your tests to make sure they fail (like theyshould) cucumber-rails - RailsGenerators for Cucumber with special support for Capybara andDatabaseCleaner launchy - launches externalapplication from within ruby programs. Used to view state of virtual pagerenders with Cucumber/Capybara capybara - Acceptance test frameworkfor web applications capybara-webkit - Acapybara driver that uses WebKit via QtWebKit poltergeist - HeadlessJavascript engine driver for Capybara headless - Ruby wrapper forXvfb, the virtual framebuffer guard - command line tool to easily handleevents on file system modifications. Use ‘gem search -r ^guard-‘ to view themany plugins that work with guard to automate testing, asset building, etc. guard-rspec - automatically runyour specs after modifying models/spec files guard-pow - automatically manage Powapplications restart guard-cucumber - automaticallyrun your features Development capistrano - Used to deployRails applications to hosting environments such as a VPS capistrano-unicorn -Capistrano integration for Unicorn jasminerice - Pain freecoffeescript testing under Rails 3.1 "
} ,
{
"title" : "Looping through dictionaries in jinja2 templates",
"category" : "",
"tags" : "jinja2, ansible, templates",
"url" : "/2015/11/looping-through-dictionaries-in-jinja2-templates/",
"date" : "2015-11-05 05:10:32 -0500",
"content" : "I am adding a script to our server using Ansible. The roles are all setup tosupport multiple Wordpress websites based on the dictionary defined inansible/group_vars/wordpress_sites.yml, as my Ansible configuration is basedon Trellis.I don’t want to use theAnsible template moduleto create a script for every website, because really I only have one websiteconfigured. Sure I might have configuration files for each site under Nginx,so that makes sense. So I decided that instead of creating multiple scripts,I’ll just have Ansible generate scripting for each of the sites inside of myshell script.Well it turns out that this isn’t do easy for someone not very familiar withJinja2 templates or Python objects.At first I figured that I would simply loop through each element inside of the‘wordpress_sites’ dictionary like so:{% for site in wordpress_sites.values() %}echo "-------------------------------------------------------"echo "| Backing Up Assets and Database for {{ site.key }} |"echo "-------------------------------------------------------"echo ""read -s -p "MySQL Password for '{{ site.env.db_user }}': " mysqlpw# Other script code goes here for each site{% endfor %}Unfortunately when I’d run the script to generate this site I would get thiserror:TASK: [admin-scripts | Add Backup script] *************************************fatal: [45.79.167.52] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'str object' has no attribute 'env'", 'failed': True}fatal: [45.79.167.52] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'str object' has no attribute 'env'", 'failed': True}This was very frustrating, as I just expected each object to have yet anotherdictionary object as it’s value. Why am I getting a string object?After much experimentation (trial and error), I realized that I could traversethrough the dictionary manually.{{ wordpress_sites }}{{ wordpress_sites['mysite.example.com'].env }}{{ wordpress_sites['mysite.example.com'].env.db_user }}{{ wordpress_sites['mysite.example.com'].env.doesnt_exist }}This resulted in an error about no attribute ‘doesnt_exist’, so I knew that theglobal variable is there and accessible. So there must be something wrong withthe loop that is converting the value to a string. I foundan articlethat used dict.values(), implying that you could call values() on adictionary, and it would return the values.I guess that a for loop inside of a jinja2 template expects a list, not adictionary.{% for site in wordpress_sites.values() %}echo "-------------------------------------------------------"echo "| Backing Up Assets and Database for {{ site.key }} |"echo "-------------------------------------------------------"echo ""read -s -p "MySQL Password for '{{ site.env.db_user }}': " mysqlpw# Other script code goes here for each site{% endfor %}This still left me unable to access the key that was used to represent the site,which is needed by my script. I testeda configuration that iterated with key and valueavailable inside of the code block.{% for site_name, site in wordpress_sites.iteritems() %}echo "-------------------------------------------------------"echo "| Backing Up Assets and Database for {{ site_name }} |"echo "-------------------------------------------------------"echo ""read -s -p "MySQL Password for '{{ site.env.db_user }}': " mysqlpwecho ""{% endfor %}This resolved my issue.I wonder if this is the type of thing that’s a no brainer to a Python developer.Probably."
} ,
{
"title" : "Vagrant SSH Failure - Connection closed by remote host",
"category" : "",
"tags" : "",
"url" : "/2015/09/vagrant-ssh-failure-connection-closed-by-remote-host/",
"date" : "2015-09-10 22:21:52 -0400",
"content" : "I recently was running into issues with Vagrant where I’d start the virtualmachine using the ‘vagrant up’ command, but I’d receive an error when trying touse vagrant ssh.$ vagrant sshssh_exchange_identification: Connection closed by remote hostI’m using a Vagrant/Ansible configuration based on roots/trellis.I tried to look into the issue further by running the command with verbose output.vagrant ssh -- -vvvI noticed that it’s trying to use 127.0.0.1 to SSH into the VM on port 2222.When I try to SSH manually using ssh vagrant@192.168.50.5 -p 2222 it worksfine, but with 127.0.0.1 I get the error still. It seemed that connecting to theVM from 127.0.0.1 is triggering some sort of block.I tried to check /var/log/syslog and /var/log/auth.log (with SSHD configured forverbose mode). I don’t see any log for the failed attempt in the auth.log,though I do see a normal login. It doesn’t seem like the connection is beingblocked.I reported this issue to the roots/trellis project - #348After further investigation I realized that I was configuring sshd on the virtualmachine to use port 2222, when really Vagrant or Virtualbox was responsible forforwarding port 2222 on localhost to port 22 of the VM. I was essentially makingthe port forwarding that it configures invalid by changing SSHD to listen onport 2222."
} ,
{
"title" : "Error when building PhantomJS 2.0",
"category" : "",
"tags" : "phantomjs",
"url" : "/2015/08/error-when-building-phantomjs-2-0/",
"date" : "2015-08-08 04:34:12 -0400",
"content" : "I was tasked with installing PhantomJS 2.0 on an Ubuntu 14.04 VPS running with2 GB of RAM. Online discussions onGithub andGoogle Groupsseemed to have pointed to the build process requiring much RAM to completewithout error.g++: internal compiler error: Killed (program cc1plus)Please submit a full bug report, with preprocessed source if appropriate.See <file:///usr/share/doc/gcc-4.8/README.Bugs> for instructions.make[2]: *** [.obj/inspector/InspectorAllInOne.o] Error 4make[2]: *** Waiting for unfinished jobs....make[2]: Leaving directory `/home/app/src/phantomjs-2.0.0/src/qt/qtwebkit/Source/WebCore'make[1]: *** [sub-Target-pri-make_first-ordered] Error 2make[1]: Leaving directory `/home/app/src/phantomjs-2.0.0/src/qt/qtwebkit/Source/WebCore'make: *** [sub-Source-WebCore-WebCore-pro-make_first-ordered] Error 2To overcome this error I checked the build.sh script and found that the scriptwould discover the number of CPU cores you’re running on a machine and thus runthat many concurrent build processes, thus using more memory. To overcome thisyou can run the script with the number of jobs specified../build.sh --jobs 1It turns out that this wasn’t anything new. Theofficial build instructions actually adviseto use the --jobs 1 argument, I just missed this because I downloaded the ZIPfile before proceeding. I did however find out that the ZIP file they have youdownload still results in error, whereas pulling the source from the ‘2.0’branch of Github is building much better now."
} ,
{
"title" : "Setup Environment for Django Development",
"category" : "",
"tags" : "python, django, virtualenv, pip",
"url" : "/2015/02/setup-environment-for-django-development/",
"date" : "2015-02-02 03:26:28 -0500",
"content" : "Although this website is primarily devoted to Ruby / Rails development, I’vefound it necessary to learn Python for a new position I might take in theupcoming year. Here is my guide for setting up your local workstation for Python/ Django development on a Mac OS X workstation.HomebrewThe first step is to ensure that you have Homebrew installed, which is a packagemanager for Mac OS X that installs various software packages that are ported forMac OS X.To install Homebrew run the following from your Terminals command line:ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"Once this is installed you should run ‘brew doctor’ and make sure that it’ssetup properly. Usually I find that I have to make sure that /usr/local/bin isthe first path shown in /etc/paths. You can edit this using nano from thecommand line.sudo nano /etc/pathsYou’ll likely also have to run brew update. Once brew doctor reports ‘Yoursystem is ready to brew’, you can move forward.For development it’s important to install software packages that are providedby Homebrew, so that all the executables and libraries you are using areprovided by Homebrew, and thus not conflicting with system libraries. Homebrewinstalls executables in /usr/local/bin, which is configured to be your primarypath. This ensures that when you try to run a command it uses the Homebrewexecutable and libraries rather than the default executables and librariesprovided by Mac OS X.PythonThe next step is to install Python. By default Python v2.7.6 is alreadyavailable for Mac OS X (Yosemite), however certain programs may rely on thisversion of Python to run on your system. By installing Python via Homebrew, itwill depend on other dependencies installed by Homebrew.This command will install both python version 2 and 3.brew install python python3After this is finished you can use ‘which’ to see which Python executables arepresent in your environment by default.$ brew install python python3$ which python/usr/local/bin/python$ which python3/usr/local/bin/python3As you can see, the Homebrew versions of Python will be used when you use these commands.VirtualEnv and VirtualEnvWrapperPython comes with a package manager called Pip that installs Python librariesfrom the PyPI (Python Package Index). Bydefault, this library installs packages globally for the version of Python youare using. For instance for Python v2, you would use ‘pip’, and for Python v3,you would use ‘pip3’ to install Python packages.$ which pip/usr/local/bin/pip$ which pip3/usr/local/bin/pip3These packages are installed globally, and available across all your projects.This can be convenient, but it can also become a problem. 
For instance, oneproject might require one version of Django, while another project requiresanother one be installed as the primary version.In the Ruby community this is where RVM or rbenv have been used to isolate theenvironment in use when you’re running a specific Ruby application, with anisolated RubyGem gem set.In the Python community the preferred tool is VirtualEnv and VirtualEnvWrapper.These are both Python tools that will need to be installed globally.pip install virtualenv virtualenvwrapperNext you’ll want to make a directory to store your virtual environments under.To keep these hidden we’ll create a hidden directory under your home directory.mkdir ~/.virtualenvsNext add the following to your .profile file in your home directory.export WORKON_HOME=$HOME/.virtualenvssource /usr/local/bin/virtualenvwrapper.shexport PIP_VIRTUALENV_BASE=$WORKON_HOMEalias workoff='deactivate'You can now create a new project to work in using the following command:$ mkvirtualenv mydjangoappNew python executable in mydjangoapp/bin/python2.7Also creating executable in mydjangoapp/bin/pythonInstalling setuptools, pip...done.Next, to work in this virtual environment, use the ‘workon’ command like so:workon mydjangoappTo exit the virtual environment you are in, simply use the ‘deactivate’ or‘workoff’ command. You can remove virtual environments using the ‘rmvirtualenv’command.To create a virtual environment using the Homebrew version of Python 3, usethis command:mkvirtualenv -p /usr/local/bin/python3 mydjangoappPostgresTypically I’d use MySQL, but it looks like the open source community isrecommending adoption of Postgres. For instance, Heroku doesn’t support MySQLby default, as they found it more portable than MySQL databases.Run the following to install Postgres, set it up to be started automatically bythe system daemon launcher (launchd), and then start the service immediately.brew install postgresln -sfv /usr/local/opt/postgresql/*.plist ~/Library/LaunchAgentslaunchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plistNow that the Postgres server is up and running, we need to establish an emptydatabase and a user with all permissions for that database.The Postgres config file is located in /usr/local/var/postgres/pg_hba.conf.Psycopg2Django requires the Psycopg2 library to connect with Postgres databases. Go intoyour virtual environment and install this package as well as the Django package.workon mydjangoapppip install psycopg2 djangoDjangoNow we’re ready to create our first Django application.django-admin.py startproject mydjangoappInside of the project folder you just created will be another folder with thesame name (mydjangoapp). To Python, this simply looks like a Python module.Django doesn’t care what the name of the outer folder is, just as long as theapp folder within it holds the correct name and configuration files.After you’ve created the new application folder, change into it’s directoryand run the following to start the Django development server.cd mydjangoapppython manage.py runserverIf you want to run this server on another IP address or port this is possible.See runserver reference.python manage.py runserver 8080python manage.py runserver 0.0.0.0:8000Database SetupWe need to create a database, and then a user with permissions to use thedatabase in PostgreSQL. 
Typically you would run sudo as the ‘postgres’, butPostgres was installed by Homebrew to run as you, so you’re the Postgres adminuser.$ createdb mydjangoapp$ createuser -SDR mydjangoapp$ psql -d postgres -c "ALTER USER mydjangoapp WITH PASSWORD 'devpass';"ALTER ROLE$ psql -d postgres -c "GRANT ALL PRIVILEGES ON DATABASE mydjangoapp to mydjangoapp;"GRANTInside of your application folder you’ll see a ‘settings.py’ file. This fileholds various settings for your Django application, which technically is aPython module. By default Django uses SQLite for the database, however we’regoing to use PostgreSQL.This requires that we change the keys in the DATABASES ‘default’ item insideof settings.py.DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), }}For our needs change this entry to reflect the following:DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'mydjangoapp', 'USER': 'mydjangoapp', 'PASSWORD': 'devpass', 'HOST': '', 'PORT': '', }}After saving these changes, run the following command to have Python create theneeded tables inside of the database:$ python manage.py syncdbOperations to perform: Apply all migrations: admin, auth, sessions, contenttypesRunning migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying sessions.0001_initial... OKYou have installed Django's auth system, and don't have any superusers defined.Would you like to create one now? (yes/no): yesUsername (leave blank to use 'myusername'):Email address: myusername@example.comPassword: ******Password (again): ******Superuser created successfully.Now you’re ready to start building your application. You can start by generatinga model using the following command.python manage.py mydjangoapp modelnameThis is where this guide leaves off. You can continue your experimentation withbuilding a Django application by following theWriting your first Django app, part 1 tutorial from the Creating Modelssection."
} ,
{
"title" : "Issues with RVM after upgrade to OS X Mavericks",
"category" : "",
"tags" : "",
"url" : "/2014/10/issues-with-rvm-after-upgrade-to-os-x-mavericks/",
"date" : "2014-10-02 00:00:00 -0400",
"content" : "So I just upgraded to OS X Mavericks (10.9.5). I also upgraded to X Code 6, andalso installed the command line tools via the xcode-select --install command.I also have the ‘apple-gcc42’ Homebrew package installed to provide GCC 4.2.Still however, when I would try to install a version of Ruby via RVM, I wouldget this error:$ rvm install ruby-1.9.3ruby-1.9.3-p547 - #removing src/ruby-1.9.3-p547 - please waitSearching for binary rubies, this might take some time.No binary rubies available for: osx/10.9/x86_64/ruby-1.9.3-p547.Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.Checking requirements for osx.Certificates in '/usr/local/etc/openssl/cert.pem' are already up to date.Requirements installation successful.Installing Ruby from source to: /Users/jasonmiller/.rvm/rubies/ruby-1.9.3-p547, this may take a while depending on your cpu(s)...ruby-1.9.3-p547 - #downloading ruby-1.9.3-p547, this may take a while depending on your connection...ruby-1.9.3-p547 - #extracting ruby-1.9.3-p547 to /Users/jasonmiller/.rvm/src/ruby-1.9.3-p547 - please waitruby-1.9.3-p547 - #applying patch /Users/jasonmiller/.rvm/patches/ruby/GH-488.patch - please waitruby-1.9.3-p547 - #configuring - please waitError running './configure --prefix=/Users/jasonmiller/.rvm/rubies/ruby-1.9.3-p547--with-opt-dir=/usr/local/opt/libyaml:/usr/local/opt/readline:/usr/local/opt/libksba:/usr/local/opt/openssl--without-tcl --without-tk --disable-install-doc --enable-shared', showing last 15 lines of /Users/jasonmiller/.rvm/log/1412286075_ruby-1.9.3-p547/configure.logconfigure: WARNING: unrecognized options: --without-tcl, --without-tkchecking build system type... i386-apple-darwin13.4.0checking host system type... i386-apple-darwin13.4.0checking target system type... i386-apple-darwin13.4.0checking whether the C compiler works... noconfigure: error: in `/Users/jasonmiller/.rvm/src/ruby-1.9.3-p547':configure: error: C compiler cannot create executablesSee `config.log' for more detailsThere has been an error while running configure. Halting the installation.I was able to get it installed and running fine by specifying for RVM to usethe clang compiler.rvm install 1.9.3 --with-gcc=clangNote: Had an issue with Canvas installation, which involved some C codedisplayed, but it turns out that this was due to anissue with the Thrift gemon Mavericks."
} ,
{
"title" : "Bypassing the AngularJS router for anchor tags",
"category" : "",
"tags" : "AngularJS, ngRoute, routing, anchor, CSV download",
"url" : "/2014/09/bypassing-the-angularjs-router-for-anchor-tags/",
"date" : "2014-09-18 22:19:44 -0400",
"content" : "I’m working with a Rails application that is using an AngularJS front-end. Weare using routing to override the behavior of anchor tags to ensure that theyload other templates with controllers, as defined in our routeConfiguration.js.This works out great most of the time, unless you need to override the routingso that your anchor tag can point to an end-point served by your Rails back-end.In my case, I’m linking to an end-point that serves a named CSV file. Withoutany sort of over-ride, I was finding that the default fallback behavior definedby the otherwise() method was occurring. In my case this was a 404 pagetemplate that loaded.After much searching, the following solution was found. All you have to do issimply add a target attribute to the anchor tag with the value “_self”.<a href="/download.csv" target="_self">Download CSV File</a>Much thanks to BJ Basañes for posting this solution. I’m re-posting it here so that it’s not lost forever, and also so that I can lend additional keywords to lead others to this solution.Update: It turns out that this is mentioned in thedocumentation for $location."
} ,
{
"title" : "Sharing Administrative Rights with Homebrew",
"category" : "",
"tags" : "homebrew, mac-osx, permissions",
"url" : "/2014/06/sharing-administrative-rights-with-homebrew/",
"date" : "2014-06-29 21:52:42 -0400",
"content" : "I installed Homebrew on my work computer, and have installedmany ports using Homebrew from an account on my machine. This has resulted inall of the files and folders managed by Homebrew being owned by the user accountI installed the ports from, with ‘admin’ group ownership.Recently I created another account on my machine, logged into it, and ran‘brew doctor’ just to make sure everything was in excellent order, and I raninto these errors:$ brew doctorWarning: /usr/local/etc isn't writable.This can happen if you "sudo make install" software that isn't managed byby Homebrew. If a brew tries to write a file to this directory, theinstall will fail during the link step.You should probably `chown` /usr/local/etcWarning: /usr/local/include isn't writable.This can happen if you "sudo make install" software that isn't managed byby Homebrew. If a brew tries to write a file to this directory, theinstall will fail during the link step.You should probably `chown` /usr/local/includeWarning: /usr/local/lib isn't writable.This can happen if you "sudo make install" software that isn't managed byby Homebrew. If a brew tries to write a file to this directory, the install will fail during the link step.You should probably `chown` /usr/local/libWarning: /usr/local/lib/pkgconfig isn't writable.This can happen if you "sudo make install" software that isn't managed by by Homebrew. If a brew tries to write a file to this directory, the install will fail during the link step.You should probably `chown` /usr/local/lib/pkgconfigWarning: /usr/local/share isn't writable.This can happen if you "sudo make install" software that isn't managed by by Homebrew. If a brew tries to write a file to this directory, the install will fail during the link step.You should probably `chown` /usr/local/shareWarning: Some directories in /usr/local/share/man aren't writable.This can happen if you "sudo make install" software that isn't managed by Homebrew. If a brew tries to add locale information to one of these directories, then the install will fail during the link step.You should probably `chown` them: /usr/local/share/man /usr/local/share/man/man1 /usr/local/share/man/man3 /usr/local/share/man/man5 /usr/local/share/man/man7 /usr/local/share/man/man8It became clear that this is happening because these files and folders are ownedby my other user account. I see that they’re associated with the ‘admin’ group,so I just figured that I needed to add my new account to the ‘admin’ group. Idid this using the following command:sudo dseditgroup -o edit -a usernametoadd -t user adminStill this command did not resolve my issue. Further investigation showed thatmy account was created as an Administrator, so I should be in this ‘admin’group already.I found many articles online that suggested other various things, includingadding a ‘brew’ group and changing all the files to be owned by this group. Idon’t recommend this, because Homebrew is already using an appropriate group,‘admin’, as the default. Homebrew updated the package manager touse the ‘admin’ group forall files/folders setup under /usr/local back in 2011.I looked at the group permissions of the files/folders and noticed that thegroup did not have write permission. I used the following commands, taken froma StackExchange post, which resolved my issue.chmod -R g+w /usr/localchmod -R g+w /Library/Caches/HomebrewUpdate 1This morning I went back to work and found that my Postgres server was notrunning. 
I checked the logs and found this error:$ tail -F /usr/local/var/postgres/server.logLOG: shutting downLOG: database system is shut downFATAL: data directory "/usr/local/var/postgres" has group or world accessDETAIL: Permissions should be u=rwx (0700).FATAL: data directory "/usr/local/var/postgres" has group or world accessDETAIL: Permissions should be u=rwx (0700).FATAL: data directory "/usr/local/var/postgres" has group or world accessDETAIL: Permissions should be u=rwx (0700).FATAL: data directory "/usr/local/var/postgres" has group or world accessDETAIL: Permissions should be u=rwx (0700).This resulted from the permissions change I made. Changing the permissions asadvised resolved the issue.$ chmod 700 /usr/local/var/postgres$ ls -la /usr/local/var/total 0drwx-w---- 9 johnsmith admin 306 Jun 20 15:31 .drwxrwxr-x 19 root admin 646 Jun 2 11:15 ..drwxrwxr-x 3 johnsmith admin 102 Oct 8 2013 dbdrwxrwxr-x 4 redconfetti admin 136 Jun 20 15:31 logdrwxrwxr-x 6 johnsmith admin 204 Jun 20 15:47 mongodbdrwxrwxr-x 8 johnsmith staff 272 Oct 9 2013 mysqldrwx------ 21 johnsmith admin 714 Jun 27 17:45 postgresdrwxrwxr-x 2 johnsmith admin 68 Oct 8 2013 run-rw-rw-r-- 1 johnsmith admin 0 Feb 25 16:45 stdoutUpdate 2It appears that these instructions also caused issues with the permissions ofthe plist files, which store the configuration of the services launched bylaunchd. The plist files for Memcached and Redis are symlinked in/Users/myuser/Library/LaunchAgents/ to their location to files like/usr/local/opt/service-name/homebrew.mxcl.service-name.plist. If these havethe wrong permissions, launchd will not use those configurations to launch theservice.lunchy start redislaunchctl: Dubious permissions on file (skipping): /Users/johnsmith/Library/LaunchAgents/homebrew.mxcl.redis.plistnothing found to loadstarted homebrew.mxcl.redisEven though it said that Redis was started, it was not actually started. Hereis the command I ran to resolve this:sudo chmod 644 /Users/johnsmith/Library/LaunchAgents/homebrew.mxcl.memcached.plistsudo chmod 644 /Users/johnsmith/Library/LaunchAgents/homebrew.mxcl.redis.plistI’m starting to think that there isn’t an ideal solution to this issue. Now myplist files cannot be updated by Homebrew."
} ,
{
"title" : "InstructureCon Hack Day",
"category" : "",
"tags" : "canvas, instructure, lti-integration",
"url" : "/2014/06/instructurecon-hack-day/",
"date" : "2014-06-17 22:40:05 -0400",
"content" : "Disclaimer: The opinions or statements expressed herein should notbe taken as a position of or endorsement by the University of California,Berkeley.I’m currently at InstructureCon attending the “Hack Day” event, which is simplyan event where any developers wishing to integrate their systems with Canvascan ask questions, talk to Canvas developers, etc.Here are some things I’ve clarified with their developers thus far, thanks toEric Berry and Brian Palmer (codekitchen.LTI Template BuilderEric Berry (cavenb) is part of the DeveloperSupport team at Instructure, which is a group that develops guides and toolsto help developers integrate their systems with Canvas. He informed me of theLTI Template Builder, whichprovides a command you can use to generate a Ruby on Rails engine that providesa template for the LTI application type of your choice.Canvas PluginsIn the Coding Guidelines documentation for Canvas-LMS, the possibility ofdeveloping a plugin for Canvas is mentioned. The documentation mentions“Plugins can be registered at runtime but only appear in the interface forenabled root accounts”. I had assumed this meant thatplugins could be enabled for a primary account (our account), but this was awrong assumption. This only means that Site Admins for a Canvas instance, suchas Canvas employees that are the root Administrators for the Cloud hosted Canvasservice, are able to manage / activate these plugins. For Canvas to introducesuch a plugin to their cloud hosted service, the plugin would need to providefunctionality that benefits all institutions, without conflicts.For instance, the Adobe Connect plugin for web conferencing was developed byOCAD University. Any proposed plugins that are developed would need to meetsimilar criteria of use across institutions.Custom Javascript and CSS Application LogicWith the cloud hosted Canvas service, the custom Javascript and CSS filesspecified under your account settings are applied to the pages ONLY when ahostname associated with your account is in use. For instance, UC Berkeley useshttp://bcourses.berkeley.edu/ with the Canvas Cloud hosted service.When using a local Canvas instance, you’ll likely use http://localhost:3000/,and thus the Javascript and CSS from an account may be applied to other accountsthat are not configured with a custom Javascript and CSS configuration.Canvas Refresh IntervalCanvas refreshes the configuration/data for their Beta and Test systems on aspecific schedule. I got some details on what this schedule is: Beta - Refresh every Sunday afternoon Test - Refresh every 3 weeks on Sunday afternoonCustom Javascript and CSS APIEvery time the Beta and Test Canvas instances are updated, we have to manuallyupdate the Javascript and CSS configuration for our account so that they pointto the Development and QA instances of our LTI application server. Luckily I wasjust informed that an API is coming soon that will make it possible to updatethese URLs.Also a new option will soon be supported to store the Javascript and CSS codewithin Canvas, instead of pointing to an external URL that may go offline.Using RBEnv or RVM with CanvasI noted to Brian Palmer that when I specify a .ruby-version or .ruby-gemset filewith my local Canvas instance, the files show up expecting to be staged in theGit repository. I noted that these files aren’t configured in the .gitignorefile in the Canvas-LMS repository.He informed me that this intentional. Canvas developers are expected toconfigure these files as ignored globally for Git."
} ,
{
"title" : "Strong Parameters with Spree Extensions",
"category" : "",
"tags" : "Spree",
"url" : "/2014/04/strong-parameters-with-spree-extensions/",
"date" : "2014-04-20 04:05:29 -0400",
"content" : "I’m currently working on an extension for Spree, an e-commerce solution forRuby on Rails applications. The developer documentation for Spree is veryhelpful, letting developers know that they should use certain Rubymeta-programming methods to extend the functionality of the Spree system. Theextension I’m working on was setup under a version of Spree that used Rails 3.Now that Spree v2.2.1 uses Rails 4.0.4, I’m having to refactor some parts ofthis extension to adapt to new practices.From Accessible Attributes to Strong ParametersUnder Rails 3, you have to use #attr_accessible to ensure that attributes of amodel can be updated via methods such as #update_attributes. This wasimplemented to protect models from mass assignment vulnerability. In Rails 4,this functionality re-implemented as a convention at the controller level, in afeature known as Strong Parameters. ThisRails 4 Quick Look: Strong Parameters article explains this clearly.Rails 3 DecoratorsUnder Rails 3, a Spree extension could introduce new columns to Spree models viaa migration, and then simply introduce a decorator like this one to make the newattributes available for mass assignment updates.Spree::Order.class_eval do attr_accessible :my_extensions_attribute, :my_extensions_attribute2endBut now with Rails 4, I have to create a decorator forSpree::Api::OrdersController instead. I imagined that this decorator will haveto somehow apply a call to the #permit method on the ‘params’, allowing myextension attributes to be updated as well.After some searching online I realized that the best solution to this problem isto specify an alias_method_chain inside my decorator. We don’t have controlover the Spree code, and there is no option for using ‘super’ becauseinheritance isn’t involved here. So this is definitely a situation where weshould use an alias method chain.Spree Controller Helpers for Strong ParametersI just noticed however that the Spree::Api::OrdersController#order_paramsmethod has a more complex method for permitting the attributes than I expected.In this case the order attributes are provided bySpree::Api::OrdersController#permitted_order_attributes, which makes a ‘super’call that refers to the parent controller Spree::Api::BaseController. TheBaseController doesn’t have a #permitted_order_attributes method defined,however it does include Spree::Core::ControllerHelpers::StrongParameters.which defines #permitted_order_attributes. If you follow the dependenciesfurther, you’ll see that all these methods inSpree::Core::ControllerHelpers::StrongParameters rely onSpree:PermittedAttributes.So all that is necessary to define a new Spree::Order attribute is to define aSpree::PermittedAttributes decorator like so:# lib/spree/permitted_attributes_decorator.rbSpree::PermittedAttributes.class_eval do @@checkout_attributes.push(:my_extensions_attribute, :my_extensions_attribute2)endI’ll have to test this out, but it seems like the plausible approach. I hopethis helps any other developers.UpdateI just went back to a StackOverflow article on this subject that I had seenbefore - Rails 4 - strong parameters concept involvement in spree-2.1. Itturns out that they referenced a simpler approach by simply placing thefollowing into an initializer.Spree::PermittedAttributes.user_attributes.push :first_name, :last_nameI don’t think this is the best approach however, because an initializer has tobe installed into the application via a generator. A generated initializercannot be maintained either."
} ,
{
"title" : "Ruby Class Name",
"category" : "",
"tags" : "",
"url" : "/2014/03/ruby-class-name/",
"date" : "2014-03-21 00:32:23 -0400",
"content" : "I noticed that in a module used on the CalCentral project that loggerexpressions used in a module referenced ‘self.name’ many times. I checkedApiDock.com for a reference to this class in the Ruby or Rails documentation,but I couldn’t find one. The module itself didn’t define a #name method, so Iwas perplexed.The module I was inspecting is meant to be used to extend other classes, meaningthat it establishes the methods as class methods. It turns out that the‘Class’ class is officially documented as having a #name method that returnsa string version of the class name. This is a valid way of logging which classthe log message originates from."
} ,
{
"title" : "Using 'for in' in Javascript",
"category" : "",
"tags" : "javascript, JsLint",
"url" : "/2014/03/using-for-in-in-javascript/",
"date" : "2014-03-18 05:04:24 -0400",
"content" : "Today our lead front-end developer pointed out to me that when using a ‘for in’loop in Javascript that you want to make sure to use hasOwnProperty() on theelement to make sure it belongs to the object, and not properties that wereinherited through the prototype chain.More information is available on this page describing common Javascript codemistakes caught by JSLint."
} ,
{
"title" : "How to 'head' a text file in Ruby",
"category" : "",
"tags" : "ruby, head",
"url" : "/2014/01/how-to-head-a-text-file-in-ruby/",
"date" : "2014-01-31 01:51:32 -0500",
"content" : "I wanted to just view the first 20 lines of a 10,000 line CSV file returned byan API in a Ruby on Rails project I’m working on. Here is the chain of Rubycommands I came up with to effectively ‘head’ the CSV document returned.>> csv = "first line\nsecond line\nthird line\nfourth line\nfifth line\nsixth line\n">> csv.split("\n")[0..3].join("\n")=> "first line\nsecond line\nthird line\nfourth line""
} ,
{
"title" : "Objective C Notes",
"category" : "",
"tags" : "",
"url" : "/2014/01/objective-c-notes/",
"date" : "2014-01-09 10:46:44 -0500",
"content" : "I’m exploring Objective C right now. There are some things that I notice and amcurious about, so I’m going to note what I find here.ArgC and ArgVMany tutorials will likely start you off using XCode to create a command lineapplication. The ‘main’ function with Objective C is defined like so:int main(int argc, const char * argv[]){ @autoreleasepool { // insert code here... NSLog(@"Hello, World!"); } return 0;}I wondered what the two parameters for the main function represent. It turns outthat “argc” means “argument count”. It signifies how many arguments are beingpassed into the command line tool you are creating. “argv” means “argumentvalues”, and is a pointer to an array of characters, otherwise representing thestring of arguments.Objective-c main routine, what is: int argc, const char * argv[]Objective C File ExtensionI wondered why Objective C files end with ‘.m’ instead of ‘.oc’ or somethinglike that. The inventor of Objective C, Brad Cox, has indicated it’s because .oand .c were already taken by the C language. It is said that the ‘m’ stands for‘messages’, and that some call them ‘method files’.See Why do Objective C files use the .m extension?"
} ,
{
"title" : "Recommended Sublime 3 Packages",
"category" : "",
"tags" : "sublime text, lint",
"url" : "/2013/12/recommended-sublime-3-packages/",
"date" : "2013-12-17 20:39:00 -0500",
"content" : "If you haven’t already switched to Vim, and you’re hacking everything out fromthe command line, you might want to check out Sublime Text 3. Sublime Text issupported for Mac, Ubuntu, and Windows.Once you’ve obtained a copy of Sublime Text 3, make sure you installPackage Control by wbond. Using the SHIFT + COMMAND + P keystroke providesyou with a whole menu of options to choose from. Here are packages that arehighly recommended for use with Sublime Text 3. DocBlockr EditorConfig GitGutter - Indicates linesthat have been added, modified, or removed in the files you are viewing. LiveReload Markdown Preview SideBarEnhancements Sass SublimeLinter - Helps todetect mistakes in your code. Many packages to support various languages. SublimeLinter-csslint SublimeLinter-html-tidy SublimeLinter-jshint SublimeLinter-json SublimeLinter-ruby SublimeCodeIntel -Function call tooltips, code complete, and jump to file and line of certainsymbols TrailingSpaces -Highlights unnecessary whitespace in your documents, and removes thewhitespace when saving the document."
} ,
{
"title" : "Setting up PostgreSQL for Rails",
"category" : "",
"tags" : "postgresql",
"url" : "/2013/11/setting-up-postgresql-for-rails/",
"date" : "2013-11-21 23:14:34 -0500",
"content" : "I’ve always used either SQLite (the default) with new Rails projects, or I’veused MySQL because I’ve been using it ever since 2002 when I started doing webdevelopment with PHP. Recently however I was challenged with deploying anapplication to Heroku as part of a code challenge I’m taking part in.Unfortunately, Heroku doesn’t support SQLite, and recommends PostgreSQL. Ratherthan waste time trying to create a MySQL app and running into problems, I’mgoing to go the easy route and use PostgreSQL.The first step I had to take was installing PostgreSQL using Homebrew. Ifigured it would default to using ‘root’ as the super user locally without apassword, just like MySQL. Postgres is actually setup owned by your local useraccount though. This makes sense given that usually a daemon is setup to run asa certain user. Unix admins should create a ‘postgres’ user, login to thataccount, then initialize and run the database as that user.I find that it’s useful to control when the Postgres server is running using theLunchy gem. It makes it easy to start and stop daemons such as this that areinstalled via Homebrew.gem install lunchylunchy start postgreslunchy stop postgresCreating a User for your Rails AppYou likely don’t want to configure your Rails app to use your username indevelopment. It’s best to use a username that is related to your application.The following command will let you add a super user/role to Postgres with theability to create databases, create other user/roles, and login.createuser --superuser --createrole --createdb --login myappuserIf you need to delete a user/role, you can use ‘dropuser’.dropuser myappuserNow that a super user is setup, you can run the rake commands to create thedatabase and run migrations.bundle exec rake db:create:allbundle exec rake db:migratebundle exec rake db:migrate RAILS_ENV=testCommand Line ClientThe command line client is ‘psql’. It defaults to using your actual Linux/Macusername, which is fine because Postgres is running under your username locallyanyway. It requires a database name also, so you’ll have to specify a databasename to even get the prompt. You can use ‘postgres’ as the database name.Otherwise use your applications database name if you want.$ psql postgrespsql (9.2.4)Type "help" for help.postgres=#Also ‘quit’ or ‘exit’ don’t get you out of the client. You have to use ‘\q’. Youcan use the ‘help’ command as prompted to see other commands.If you want to view a list of users directly from the ‘postgres’ database, youcan use the following query.SELECT * FROM pg_roles;To view a list of databases, you can directly use the psql command from thecommand line.psql -l"
} ,
{
"title" : "ComputerName: not set",
"category" : "",
"tags" : "oh-my-zsh",
"url" : "/2013/10/computername-not-set/",
"date" : "2013-10-03 02:20:22 -0400",
"content" : "I recently installed Oh-my-Zsh on a new Macbook Pro running Mountain Lion.When I opened up my terminal, I received the message “ComputerName: not set”.I tried to use the ‘sudo hostname’ command, but this didn’t seem to work. Iended up opening System Preferences -> Sharing, and then set my Computer Name.I found an article that also suggested using the following:sudo scutil --set ComputerName "newname"sudo scutil --set LocalHostName "newname"sudo scutil --set HostName "newname""
} ,
{
"title" : "Bundler Definitions",
"category" : "",
"tags" : "bundler",
"url" : "/2013/09/bundler-definitions/",
"date" : "2013-09-01 23:16:38 -0400",
"content" : "I’m currently starting work on a Ruby gem,Github profile for Annotate Gemfile, that will grab the title, description,homepage URL, or source URL for every defined gem, and then add them as anannotation / commented for each gem definition in the Gemfile.As part of this exploration, I’m digging into the source for Bundler to try anunderstand how it imports details on the gems from the Gemfile, how it queriesfor details on each from RubyGems. Here are some discoveries I’m making.Bundler.definitionThis appears to return an initialized object with the Gemfile definitionsloaded. Bundler.definition returns an instance of Bundler::Definition for thedefault Gemfile and default lock file (Gemfile.lock). Readable attributes ofthis “definition” are platforms, sources, ruby_version, and dependencies.>> Bundler.definition.platforms=> ["ruby"]>> Bundler.definition.sources=> [source at ., rubygems repository https://rubygems.org/]>> Bundler.definition.ruby_version=> nil>> Bundler.definition.dependencies=> [<Bundler::Dependency type=:runtime name="annotate_gem" requirements=">= 0">, <Bundler::Dependency type=:development name="bundler" requirements="~> 1.3">, <Bundler::Dependency type=:development name="rake" requirements=">= 0">, <Bundler::Dependency type=:development name="rspec" requirements="~> 2.14.1">]PlatformsBundler allows the specification for gems to be installed only on certainRuby platforms. Options include different minor versions of Ruby (1.8, 1.9,2.0) as well as differences between types of Ruby (MRI Ruby, Rubinius, jRuby,various Windows Ruby versions).SourcesThere appear to be three source types supported by Bundler.RubyGemsA Bundler::Source::RubyGems object represents a RubyGems server. The defaultone is RubyGems.org. If you check your Gemfile, you’ll see a ‘source’ directivethat points to the URL for the gem server with http://www.rubygems.org/ as theURL.1.9.3p448 :044 > Bundler.definition.sources[1].class => Bundler::Source::Rubygems1.9.3p448 :038 > Bundler.definition.sources[1] => rubygems repository https://rubygems.org/1.9.3p448 :039 > Bundler.definition.sources[1].name => "rubygems repository https://rubygems.org/"1.9.3p448 :040 > Bundler.definition.sources[1].options => {"remotes"=>["https://rubygems.org/"]}1.9.3p448 :041 > Bundler.definition.sources[1].remotes => [#<URI::HTTPS:0x007fb59138ea20 URL:https://rubygems.org/>]1.9.3p448 :042 > Bundler.definition.sources[1].caches => [#<Pathname:/Users/redconfetti/Sites/annotate_gemfile/vendor/cache>, "/Users/redconfetti/.rvm/gems/ruby-1.9.3-p448@annotate_gemfile/cache", "/Users/redconfetti/.rvm/gems/ruby-1.9.3-p448@global/cache"]1.9.3p448 :043 > Bundler.definition.sources[1].dependency_names => ["rake", "annotate_gem", "bundler", "diff-lcs", "rspec-core", "rspec-expectations", "rspec-mocks", "rspec"]The ‘dependency_names’ are the names of gems which rely on this RubyGem source server.PathA Bundler::Source::Pathobject represents a local gem path. This source is fromthe local path of the Gemfile I’m developing. 
The ‘name’ appears to be the nameof the dependency that relies on the “path” source.1.9.3p448 :045 > Bundler.definition.sources[0].class => Bundler::Source::Path1.9.3p448 :030 > Bundler.definition.sources[0].name => "annotate_gemfile"1.9.3p448 :031 > Bundler.definition.sources[0].path => #<Pathname:.>1.9.3p448 :032 > Bundler.definition.sources[0].options => {"path"=>"."}1.9.3p448 :033 > Bundler.definition.sources[0] => source at .GitA Bundler::Source::Git object represents a Git repository that provides thesource code for a defined gem dependency.2.0.0p247 :018 > Bundler.definition.sources[0].class => Bundler::Source::Git2.0.0p247 :019 > Bundler.definition.sources[0].name => "bootstrap-sass"2.0.0p247 :020 > Bundler.definition.sources[0].uri => "https://github.com/thomas-mcdonald/bootstrap-sass.git"2.0.0p247 :021 > Bundler.definition.sources[0].ref => "master"2.0.0p247 :022 > Bundler.definition.sources[0].branch => nil2.0.0p247 :023 > Bundler.definition.sources[0].options => {"revision"=>"16da2ffb1fb98672f498b482c318db0cfb20a054", "uri"=>"https://github.com/thomas-mcdonald/bootstrap-sass.git"}2.0.0p247 :024 > Bundler.definition.sources[0].submodules => nil2.0.0p247 :025 > Bundler.definition.sources[0].version => nilAs you can see, the ‘name’ attribute defines the gem that is dependent on thisspecific Git repository.Ruby VersionYour Gemfile may include a specific Ruby version, which is expressed as aBundler::RubyVersion object in the definition. Herokurequires the Ruby version for projects you are deploying to their platform.# Gemfileruby '2.0.0'# Console2.0.0p247 :001 > Bundler.definition.ruby_version => #<Bundler::RubyVersion:0x007fe8c890a908 @version="2.0.0", @engine="ruby", @input_engine=nil, @engine_version="2.0.0">DependenciesThis is the meat of the Gemfile, the dependencies which are represented asBundler::Dependency objects.1.9.3p448 :003 > Bundler.definition.dependencies[0] => <Bundler::Dependency type=:runtime name="annotate_gem" requirements=">= 0">1.9.3p448 :004 > Bundler.definition.dependencies[1] => <Bundler::Dependency type=:development name="bundler" requirements="~> 1.3">1.9.3p448 :005 > Bundler.definition.dependencies[2] => <Bundler::Dependency type=:development name="rake" requirements=">= 0">1.9.3p448 :006 > Bundler.definition.dependencies[3] => <Bundler::Dependency type=:development name="rspec" requirements="~> 2.14.1">It appears that dependencies do not specify a source if they are provided from aRuby Gems server. Only if they are sourced from a Path or Git repository.# Dependencies from my gem in development1.9.3p448 :021 > Bundler.definition.dependencies[0] => <Bundler::Dependency type=:runtime name="annotate_gem" requirements=">= 0">1.9.3p448 :022 > Bundler.definition.dependencies[0].source => source at .1.9.3p448 :023 > Bundler.definition.dependencies[1] => <Bundler::Dependency type=:development name="bundler" requirements="~> 1.3">1.9.3p448 :024 > Bundler.definition.dependencies[1].source => nil# Bootstrap-Sass dependency from another project2.0.0p247 :022 > Bundler.definition.dependencies[10].class => Bundler::Dependency2.0.0p247 :023 > Bundler.definition.dependencies[10] => <Bundler::Dependency type=:runtime name="bootstrap-sass" requirements=">= 0">2.0.0p247 :024 > Bundler.definition.dependencies[10].groups => [:default]2.0.0p247 :025 > Bundler.definition.dependencies[10].source.class => Bundler::Source::Git"
} ,
{
"title" : "Yard Documentation",
"category" : "",
"tags" : "Yard",
"url" : "/2013/09/yard-documentation/",
"date" : "2013-09-01 00:50:47 -0400",
"content" : "Here are my own notes for using Yard to provide the Ruby API documentation andother notes for your application.InstallationFirst add the Yard gem to your Gemfile, preferably in the development group ifapplicable.group :development do # Yard # YARD is a Ruby Documentation tool # https://github.com/lsegal/yard gem "yard", "~> 0.8.7"endRunningYou can use Yard to generate documentation by just running ‘Yard’ from the rootof your application.$ yardFiles: 36Modules: 10 ( 10 undocumented)Classes: 26 ( 21 undocumented)Constants: 0 ( 0 undocumented)Methods: 140 ( 54 undocumented) 51.70% documentedYou can also run a server that updates dynamically as you add documentation.$ yard server>> YARD 0.8.7 documentation server at http://0.0.0.0:8808[2013-08-31 14:28:15] INFO WEBrick 1.3.1[2013-08-31 14:28:15] INFO ruby 2.0.0 (2013-06-27) [x86_64-darwin12.4.1][2013-08-31 14:28:15] INFO WEBrick::HTTPServer#start: pid=41901 port=8808ConfigurationYou can run the ‘yardoc’ command with options that cause it to parse certaindirectories for documentation. With Rails applications it appears that thisisn’t necessary. Rather than add options or flags after the yard command eachtime, you can configure a .yardopts file with the arguments you would normallyuse from the command line.Yard will make use of your README.md file as the index page for thedocumentation, but to include other files you could configure a .yardopts filelike so:-README.mdCHANGELOG.mdThis makes it possible for the CHANGELOG to show up under the ‘File List’ section.I prefer to have my own hierarchy of markdown files in /doc, with generateddocumentation in /doc/app. This way I can completed delete the doc/app folderwithout affecting my other markup files in the root of /doc.--output-dir doc/app-doc/DevelopmentTasks.mdCHANGELOG.mdREADME.mdHere is a good example of a more elaborately configured .yardopts file. Youcan also run ‘yardoc –help’ to discover other options to add to the file."
} ,
{
"title" : "Exploring Bundler Commands",
"category" : "",
"tags" : "rubygems, bundler, Thor",
"url" : "/2013/08/exploring-bundler-commands/",
"date" : "2013-08-30 04:11:49 -0400",
"content" : "You may be used to using ‘bundle install’ or ‘bundle exec’ often, but here aresome commands you might have forgotten about or never heard of.Bundle InitYou don’t have to create your own Gemfile manually for new Ruby based projects.Bundle Init creates a new one for you.$ bundle initWriting new Gemfile to /Users/myuser/Projects/hey_guys/Gemfile$ ls -la Gemfile-rw-r--r-- 1 myuser mygroup 64 Aug 29 17:40 GemfileBundle ConsoleFor purer Ruby projects, this is useful. Start an IRB session in the context ofthe current bundle.$ bundle consoleResolving dependencies...1.9.3p448 :001 > require 'rake' => true1.9.3p448 :002 > Rake => RakeBundle OpenAfter you have configured your default text editor, which could be Vim, Emacs,Textmate, or Sublime, you can use ‘bundle open’ to quickly open your editor withthe root directory for the gems source code loaded.$ bundle open rakeResolving dependencies...Bundle GemBundler can even help you get started with the development of a new gem.$ bundle gem smash_pumpkin create smash_pumpkin/Gemfile create smash_pumpkin/Rakefile create smash_pumpkin/LICENSE.txt create smash_pumpkin/README.md create smash_pumpkin/.gitignore create smash_pumpkin/smash_pumpkin.gemspec create smash_pumpkin/lib/smash_pumpkin.rb create smash_pumpkin/lib/smash_pumpkin/version.rbInitializating git repo in /Users/redconfetti/Sites/annotate_gemfile/smash_pumpkinBundle InjectBundle Inject is an undocumented feature added onNovember 29, 2012, in version 1.3.0.pre, implemented by Engine Yard likely fortheir own automation. This command allows you to add gems to your Gemfile fromthe command line. Great jorb Engine Yard!$ bundle injectbundle inject requires at least 2 arguments: "bundle inject GEM VERSION ...".Bundler defines this command, as well as the others, in Bundler::CLI. Thisfile defines the command line interface for bundle commands (bundle install,bundle update, bundle exec, bundle gem,etc) using Thor. Thor command supportstandard command line style options and flags.Here is an example of the default intended use of the command.$ bundle inject poltergeist 1.3.0Fetching gem metadata from https://rubygems.org/......Fetching gem metadata from https://rubygems.org/..Resolving dependencies...Added to Gemfile: poltergeist (= 1.3.0)The resulting definition added to my Gemfile was very descriptive.# Added at 2013-08-29 17:54:59 -0700 by my_user_name:gem 'poltergeist', '= 1.3.0'I explored the code and it doesn’t appear that there are options to includespecial arguments such as the branch or git repository. It does however supportmultiple gem argument sets like so:$ bundle inject poltergeist 1.3.0 pry 0.9.12.2Fetching gem metadata from https://rubygems.org/......Fetching gem metadata from https://rubygems.org/..Resolving dependencies...Added to Gemfile: poltergeist (= 1.3.0) pry (= 0.9.12.2)The result being:# Added at 2013-08-29 18:09:41 -0700 by redconfetti:gem 'poltergeist', '= 1.3.0'gem 'pry', '= 0.9.12.2'"
} ,
{
"title" : "Paperclip URL and Path",
"category" : "",
"tags" : "paperclip, s3",
"url" : "/2013/08/paperclip-url-and-path/",
"date" : "2013-08-28 23:04:13 -0400",
"content" : "I’m trying to configure my application so that it stores files in S3 by defaultwhen my application is running in the Production Rails environment, with localfile storage and a customized file path for development and test environments.Here is my configuration.Paperclip DefaultsYou can view the default options by opening the Rails console and inspectingPaperclip::Attachment.default_options. I’m using Paperclip 3.5.1.pry(main)> Paperclip::Attachment.default_options=> {:convert_options=>{}, :default_style=>:original, :default_url=>"/:attachment/:style/missing.png", :escape_url=>true, :restricted_characters=>/[&amp;$+,\/:;=?@<>\[\]\{\}\|\\\^~%# ]/, :filename_cleaner=>nil, :hash_data=>":class/:attachment/:id/:style/:updated_at", :hash_digest=>"SHA1", :interpolator=>Paperclip::Interpolations, :only_process=>[], :path=>":rails_root/public:url", :preserve_files=>false, :processors=>[:thumbnail], :source_file_options=>{}, :storage=>:filesystem, :styles=>{}, :url=>"/system/:class/:attachment/:id_partition/:style/:filename", :url_generator=>Paperclip::UrlGenerator, :use_default_time_zone=>true, :use_timestamp=>true, :whiny=>true, :check_validity_before_processing=>true}Overriding DefaultsYou can override the defaults in config/initializers/paperclip.rb using thefollowing example which configures Paperclip to use S3 in production, with theImageMagick path we use on the production server (running Ubuntu instead ofMacOSX):if Rails.env.production? # See http://rubydoc.info/gems/paperclip/Paperclip/Storage/S3 Paperclip.options[:image_magick_path] = "/usr/bin/" Paperclip::Attachment.default_options.merge!({ :storage => :s3, :s3_credentials => "#{Rails.root}/config/s3.yml", :bucket => YAML.load_file("#{Rails.root}/config/s3.yml")[Rails.env]['bucket'], :url => ":s3_domain_url" })endHere is my configuration for the local dev/test environments:# Config for Non-production Environmentsunless Rails.env.production? Paperclip::Attachment.default_options.merge!({ :url => "/system/:rails_env/:class/:attachment/:id_partition/:style/:filename", :path => ":rails_root/public:url" })endRemember that the :path and :url have to align so that the file path and URLpath match and the file is served by your web server. It’s best to modify thepath only if the local filesystem prefix is different than the ‘public’ folderin the root of your Rails application directory."
} ,
{
"title" : "Open Source Ideas",
"category" : "",
"tags" : "",
"url" : "/2013/08/open-source-ideas/",
"date" : "2013-08-26 01:47:39 -0400",
"content" : "I’ve become aware of how important it is to provide a portfolio of things toshare with prospective employers. Having a Github profile that shows off allthe Gists and public repositories you’ve contributed to serves as a perfectportfolio piece.Most of the work I’ve done has been on private systems however, and I’vediscarded the code, because I don’t own it and don’t want legal trouble. So whatcan I do that is practical, helpful, and/or fun!?Here is my list of pet project ideas that I plan on doing in the future. Pop Culture Faker - There is a tool that outputs fake names, business names,phone numbers, addresses, etc. called Faker. Often I find myself using nameslike ‘Bob Barker’, or ‘Joffrey Baratheon’ in my tests. It would be cool tomake an alternative version of Faker (or FFaker) that pulls from commonnames of celebrities, fictional characters, etc. Gemfile Annotator - I remember hearing about a gem that wouldannotate your Rails models. I want to make one that adds the name,description, and Github project URL for each gem inside of your Gemfile,like so: # Figaro# Simple Rails app configuration# https://github.com/laserlemon/figarogem "figaro" "
} ,
{
"title" : "Precompiling Rails 4 Assets When Deploying to Heroku",
"category" : "",
"tags" : "heroku, rails4, asset pipeline",
"url" : "/2013/08/precompiling-rails4-assets-when-deploying-to-heroku/",
"date" : "2013-08-25 03:13:01 -0400",
"content" : "I’m working on a Rails 4.0.0 application, using Ruby 2.0.0 for a code challengeI’m working on. Part of this challenge is to deploy my application to Heroku. Ihaven’t done this before, as I’m accustomed to deploying to a VPS withCapistrano.Upon my first deploy I discovered that my assets weren’t compiling. It wasn’teven mentioned in the output while deploying. I reviewed the Heroku article onthe Rails asset pipeline, but this didn’t offer me any details to resolve myissue.I discovered that I should add the rails_12factor gem to my production gemgroup in the Gemfile. Here’s a pretty annotated version that I used.# Productiongroup :production do # Rails 12factor # Makes running your Rails app easier. Based on the ideas behind 12factor.net # Needed for support of Asset Pipeline with Heroku # https://github.com/heroku/rails_12factor gem 'rails_12factor'endAfter doing this I ran the rake task to precompile assets, committed the changesto my repository, then deployed. This resolved the issue, at least in the shortterm. Having to precompile assets before each deploy isn’t very efficient though,so I ran the task to clobber all the assets. If you don’t clear all theprecompiled assets using ‘clobber’, Heroku will not attempt to precompile theassets.rake assets:precompilerake assets:clobberI tried many other asset pipeline settings in config/application.rb andconfig/environments/production.rb that were recommended in variousStackOverflow threads, but nothing was working. I don’t recall when, but atsome point I finally started to see that Heroku was trying to precompile assetswhen I deployed. Your bundle is complete! It was installed into ./vendor/bundle Cleaning up the bundler cache.-----> Writing config/database.yml to read from DATABASE_URL Error detecting the assets:precompile task-----> Discovering process typesI ran the command to detect the rake tasks on the remote server, and‘assets:precompile’ was showing up. I didn’t understand why it’s saying that itcan’t detect the task when it’s right there on the remote server.$ heroku run rake -TRunning `rake -T` attached to terminal... up, run.1071rake about # List versions of all Rails frameworks and the environmentrake assets:clean # Remove old compiled assetsrake assets:clobber # Remove compiled assetsrake assets:environment # Load asset compile environmentrake assets:precompile # Compile all the assets named in config.assets.precompileAfter further investigating I found this post. It turns out that if yourapplications configuration variables need to be present during the compilationof the “slug” being compiled, an add-on called user-env-compile might helpyour application during deployment.I’m using Figaro to load application configuration values, including the assethostname.# config/application.rbconfig.action_controller.asset_host = 'http://' + Figaro.env.hostnameThis must be the reason it was failing to attempt the precompiling of assetsduring deployment.heroku labs:enable user-env-compile -a myappI installed user-env-compile and now it’s working just fine."
} ,
{
"title" : "Resetting Paths for Homebrew",
"category" : "",
"tags" : "homebrew, command line, cmdline",
"url" : "/2013/08/resetting-paths-for-homebrew/",
"date" : "2013-08-22 22:31:38 -0400",
"content" : "I recently needed to install a program on my Mac using Homebrew. I wasinstructed to run ‘brew update’, and then the ‘brew doctor’ command whichresulted in this message:Warning: /usr/bin occurs before /usr/local/binThis means that system-provided programs will be used instead of thoseprovided by Homebrew. The following tools exist at both paths: gcov-4.2 git git-cvsserver git-receive-pack git-shell git-upload-archive git-upload-packConsider amending your PATH so that /usr/local/binoccurs before /usr/bin in your PATH.I am using Zsh instead of Bash, and checked my .bashrc, .bash_profile, .zshenv,and .zshrc files. None of those expressed the path with the /usr/bin pathexpressed before the /usr/local/bin.I also noticed that when using ‘echo $PATH’ the paths were being duplicated. Isaw that in my .zshrc I was setting the path in the correct order, but somethingelse was setting the paths in the incorrect order…and taking precendence.It turns out that there is a file - /etc/paths - which controls the defaultpaths for all users on the system. I used ‘sudo nano /etc/paths’ to edit myconfiguration to reflect the following:/usr/local/bin/usr/bin/bin/usr/sbin/sbinI opened a new terminal and ran ‘brew doctor’ again.$ brew doctorYour system is ready to brew."
} ,
{
"title" : "Time Management",
"category" : "",
"tags" : "time management",
"url" : "/2013/07/time-management/",
"date" : "2013-07-30 08:25:04 -0400",
"content" : "I’ve recently became aware of a time management technique known as thePomodoro Technique. You timea period of work for 25 minutes, then take a short break, then do another periodagain. This helps you gauge the amount of work you’re getting done in a periodof time, and is supposed to help with mental agility.My friend uses Vitamin R as an app onhis Mac to remind him of how much time he has left during a time period."
} ,
{
"title" : "Ruby Strftime",
"category" : "",
"tags" : "dates, times",
"url" : "/2013/07/ruby-strftime/",
"date" : "2013-07-27 01:16:42 -0400",
"content" : "Instead of piecing together Ruby strftime strings to use for various formatseach time, I’m making this post to store common variations for me to reference later.I used the legend posted by annaswims onApiDock.com topiece these together. Thanks Anna.# Pretty AbbreviatedTime.now.strftime("%a %b %d, %Y %l:%M:%S %p %Z") # => "Fri Jul 26, 2013 3:06:04 PM PDT"# Pretty LongTime.now.strftime("%A %B %d, %Y %l:%M:%S %p %Z") # => "Friday July 26, 2013 3:06:53 PM PDT"# Short but HumanTime.now.strftime("%-m/%d/%y - %-l:%M:%S %p %Z") # => "7/26/13 - 3:10:15 PM PDT"# Logger StyleTime.now.strftime("%m/%d/%y - %H:%M:%S %Z") # "07/26/13 - 15:13:53 PDT"# ISO8601 formatTime.now.utc.strftime('%FT%H:%MZ') # => "2013-07-26T22:15Z"# DateTime format used with ActiveRecordTime.now.utc.strftime('%F %H:%M:%S') # => "2013-07-26 22:19:09""
} ,
{
"title" : "Uptime Monitoring and Alerts",
"category" : "",
"tags" : "",
"url" : "/2013/07/uptime-monitoring-and-alerts/",
"date" : "2013-07-26 03:34:25 -0400",
"content" : "Just happened to hear about these solutions recently. Pingdom PagerDuty"
} ,
{
"title" : "Installing Rails 3.2.13",
"category" : "",
"tags" : "Rails 3",
"url" : "/2013/07/installing-rails-3-2-13/",
"date" : "2013-07-16 06:00:40 -0400",
"content" : "Rails 4 is out now, and installs by default. You might need to install Rails 3for a project. This is how you do it.gem install --version '3.2.13' rails"
} ,
{
"title" : "POW RVM ZSH",
"category" : "",
"tags" : "rvm, pow, zsh",
"url" : "/2013/07/pow-rvm-zsh/",
"date" : "2013-07-15 03:07:38 -0400",
"content" : "I’m using a Rails 3 app, and my colleague updated the RVM config to use Ruby2.0.0. I was having issues getting POW to work with the app. I’m using ZSH as myshell also.The following command resolved my issue.rvm env . -- --env > .powenvProps to Linus on StackOverflow for this solution."
} ,
{
"title" : "Devise_For with Skip",
"category" : "",
"tags" : "devise",
"url" : "/2013/07/devise_for-with-skip/",
"date" : "2013-07-12 01:26:51 -0400",
"content" : "I just stumbled upon the options for devise_for which let you auto-generate theroutes that are needed for a certain devise resource (user), with certaincategories of routes skipped.For example, if I want to define routes for my User, I can define:devise_for :usersThis results in the routes for these three categories: Sessions - Sign in, Sign out Passwords - Password reset options Registrations - Creating new user, updating existing user, or destroying youruser accountYou can leave one of these categories out of the route definition by using skip.For instance if you want only the Sign-in and Sign-out options, you could definethis in your routes.rb file:devise_for :users, :skip => [:registrations, :passwords]"
} ,
{
"title" : "Project / Task Management Applications",
"category" : "",
"tags" : "planning, analysis, project management",
"url" : "/2013/07/project-task-management-applications/",
"date" : "2013-07-08 02:11:31 -0400",
"content" : "I’ve worked on various projects that used various task management applicationshosted in the cloud (software as a service). I hear about new ones every sooften, so I decided to note them here for future reference. Pivotal Tracker - Anyone trying to adopt theagile / scrum method of development has likely used this. BaseCampHQ.com - This is the first Rails application.The reason Ruby on Rails exists. Simple and elegant. Wrike.com - Very flexible. Can be used to multiplepeople, in different organizations, matching any special hierarchy of tasks. PlanScope.io Asana.com"
} ,
{
"title" : "Refinery Extension Not Named After Model",
"category" : "",
"tags" : "refinery-cms",
"url" : "/2013/06/refinery-extension-not-named-after-model/",
"date" : "2013-06-18 09:01:25 -0400",
"content" : "A project I’m working on currently relies on Refinery CMS to administrate thepages. Instead of building our own separate admin area for our own custommodels, we’re continuing to use Refinery for our non-page models as well.Refinery and it’s extensions are generated under their own namespace to ensurethat they play nice with any system you install Refinery into. It can providepage management inside of an existing Rails app you have, or it can act as thecenter of the entire website. I’m pretty sure the main concept is that it leavesthe view/layout presentation up to you, but provides the page administrationback-end for you.If you have special models that you want Refinery to manage as well, you cangenerate a Refinery extension, which is essentially a Rails engine createdunder vendor/extensions. Something that I really didn’t like however was howevery new model you want to add gets it’s own extension folder. Since theextensions are namespaced, you could add a new extension called ‘book’ and endup with the model undervendor/extensions/books/app/models/refinery/book/book.rb, and defined asRefinery::Books::Book.I found out later though that you can addnew resources to existing extensions. Now instead of having a largetree/hierarchy of files displayed in my project file view, it can be compactedunder one single folder, with all the interrelated models under the samenamespace. I can see that this same paradigm applies to theRefinery-Blog plugin with posts, comments, and categories configured under thesame extension.Extension with Independent NameOne of the things that bothered me here is that you cannot generate an extensionusing a name not associated with a model. For instance, the Refinery-Blog pluginmentioned above is called ‘Refinery-Blog’, but there is no ‘blog’ model. Thereare just posts, comments, categories, etc. The plugin is configured with thepost admin page being the defined URL for the plugin itself, as a ‘post’ is thecenterpiece of a ‘blog’.For someone that isn’t familiar with the logic behind the extension codegenerated, this can be frustrating.I’ve found the following method to accomplish this using the generator however.First, create a folder for your new extension. For this example we’ll call theextension ‘shapes’, with ‘square’ being first resource we plan on adding to theextension. You’ll have to create and populate certain files so that thegenerator can modify the existing files as it’s designed to do. Do replace‘shapes’ in these examples with the name of your extension where applicable. Create folder ‘vendor/extensions/shapes/’ Create folder ‘vendor/extensions/shapes/config’ Create file ‘vendor/extensions/shapes/config/routes.rb’ Create file ‘vendor/extensions/shapes/lib/refinerycms-shapes.rb’Make sure your config/routes.rb file has the following inside it.Refinery::Core::Engine.routes.append do # engine logic goes hereendAfter this is done, I ran the following command to generate my first resourcewithin the extension I created by hand.rails g refinery:engine square size:integer --extension shapes --namespace shapesIt created the new resource without any errors. Now I can re-creating theresources that were previously created under their own extensions, so thatthey’re all under my new extension. I expect this will be cleaner, and moreappropriate to how they should be added."
} ,
{
"title" : "Splitting a Branch with Git",
"category" : "",
"tags" : "git",
"url" : "/2013/06/splitting-a-branch-with-git/",
"date" : "2013-06-05 22:18:38 -0400",
"content" : "There are times that a task you are working on results in an extremely hugeamount of changes. Although you may have been careful, and tested eachmodification out well, there is always a possibility that something will causean issue in production. If your branch contains modifications that can bereleased in separately, without interdependencies, it’s a good idea to splitthe feature branch into separate releases.First you’ll want to interactively rebase your branch, squash all commits intoa single commit, and then amend the remaining commit so that it’s the mostrecent.git checkout my_feature_branchgit fetchgit rebase -i origin/mastergit commit --amend --reset-authorYou can confirm that your last commit which contains all the changes you’veprovided in your feature branch is the last one using ‘git log’.Next, create a new branch from your rebased feature branch using a name thatdescribes the first portion of modifications you’re wanting to split off fromyour finished feature branch.git checkout -b new_comments_and_docsThen reset your branch to the commit that comes before your squashed commit.This is practically the state of the last commit in the master branch that yourebased from.git reset HEAD^If you run ‘git status’ now, you’ll see the list of unstaged/modified files, anduntracked/new files that contain your work from this branch. It’s a good idea totake this list of files and separate them into groups for the split branches youplan on creating, using ‘git diff’ on the modified files to review the changesyou made. This will help you avoid mistakenly forgetting to include certainfiles during the process.Once you’re ready, simply use ‘git add’ on the files that contain the changesyou wish to keep in the current split from your feature branch.After you’ve added the files/modifications you wish to keep in this branch, andcommitted them, run the following commands to clear remaining modifications anduntracked files.git checkout *git clean -fYou now have a split version of your feature branch. Checkout your featurebranch and perform the steps above for the other changes you wish to split intoseparate branches."
} ,
{
"title" : "Application Builders",
"category" : "",
"tags" : "builder",
"url" : "/2013/06/application-builders/",
"date" : "2013-06-05 10:11:12 -0400",
"content" : "Everytime I setup a new Rails application I have to go through the configurationand change many things. It’s as if there is a specific configuration that Iprefer. For instance, I like using the Twitter Bootstrap framework for myfront-end…at least just to get started. I like to use Rspec and Cucumber fortesting. The list goes on.I just stumbled upon this argument that the ‘rails’ executable provides whengenerating a new Rails application.$ rails --helpUsage: rails new APP_PATH [options]Options: -r, [--ruby=PATH] # Path to the Ruby binary -b, [--builder=BUILDER] # Path to a application builder (can be a filesystem path or URL)It appears that I can configure many options in some sort of file, hosted in aGist file under my Github account, which I can use for each new project I begin.Awesome! I’ll have to explore this later, but for now here is a good article onthe subject:Rails 3 Application BuildersI found that this article was from 2010, so documentation is likely a littlebetter out there. Then I stumbled onto this Rails Composer which asks youquestions to help you setup a new Rails application. It’s like a Railsgenerator on steroids.While I was trying to follow it’s instructions to run the generator, I keptgetting an SSL error like so: apply https://raw.github.com/RailsApps/rails-composer/master/composer.rb/Users/jsmith/.rvm/rubies/ruby-1.9.3-p327/lib/ruby/1.9.1/net/http.rb:799:in `connect': SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (OpenSSL::SSL::SSLError) from /Users/jsmith/.rvm/rubies/ruby-1.9.3-p327/lib/ruby/1.9.1/net/http.rb:799:in `block in connect' from /Users/jsmith/.rvm/rubies/ruby-1.9.3-p327/lib/ruby/1.9.1/timeout.rb:54:in `timeout' from /Users/jsmith/.rvm/rubies/ruby-1.9.3-p327/lib/ruby/1.9.1/timeout.rb:99:in `timeout'I found the recommendation to run the following to install a CURL CA bundle.After installing this I added the export command to my .zshrc file (zshequivalent of the .bashrc file).brew install curl-ca-bundleAfter I did this, then opened a new terminal window, the generator worked justfine.rails new myapp -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb -T -O"
} ,
{
"title" : "Technical Debt",
"category" : "",
"tags" : "technical debt",
"url" : "/2013/06/technical-debt/",
"date" : "2013-06-03 21:43:05 -0400",
"content" : "If I were to pick one area of advice for any aspiring business men seeking theirMaster of Business Administration degree, with the intent of working for acompany that involves developers, I recommend that you explore this concept. Ifyou’re unaware of this possibility, your project might eventually end updrastically behind.Technical Debt"
} ,
{
"title" : "Why Ruby Was Named After a Gemstone",
"category" : "",
"tags" : "ruby, perl",
"url" : "/2013/06/why-ruby-was-named-after-a-gemstone/",
"date" : "2013-06-03 19:54:30 -0400",
"content" : "Matz: Ruby is named after the precious gemstone, it’s not anabbreviation of anything. When I started the language project, I was joking witha friend that the project must be code-named after a gemstone’s name(àla Perl). So my friend came up with “ruby”. It’s a short name for abeautiful and highly valued stone. So I picked up that name, and it eventuallybecame the official name of the language.Taken from An Interview with the Creator of Ruby by Bruce Stewart"
} ,
{
"title" : "Downloadable Documentation",
"category" : "",
"tags" : "documentation",
"url" : "/2013/05/downloadable-documentation/",
"date" : "2013-05-30 21:20:00 -0400",
"content" : "Some people want to download the documentation for the languages they’re using.This is needed when an internet connection isn’t available (like using a laptopon a plane), or even for the sake of speed.Here are two sources of downloadable documentation. Ruby on Rails Documentation Ruby documentation"
} ,
{
"title" : "History of Internationalization in Software",
"category" : "",
"tags" : "unicode",
"url" : "/2013/05/history-of-internationalization-in-software/",
"date" : "2013-05-29 21:09:34 -0400",
"content" : "Here are two articles that were recommended by co-workers today. The Absolute Minimum Every Software Developer Absolutely, Positively MustKnow About Unicode and Character Sets (No Excuses!) - by Joel Spolsky History of Character Encoding"
} ,
{
"title" : "Mobile Application Performance Monitoring and Management",
"category" : "",
"tags" : "mobile",
"url" : "/2013/05/mobile-application-performance-monitoring-and-management/",
"date" : "2013-05-28 22:45:57 -0400",
"content" : "It seems like the industry is all astir about mobile these days. I’m thinking ofjumping into such interests.Heard about this today. It’s like New Relic for mobile apps.https://www.crittercism.com/"
} ,
{
"title" : "Use Ruby to Develop iOS or Mac OSX",
"category" : "",
"tags" : "ios, mac-osx",
"url" : "/2013/05/use-ruby-to-develop-ios-or-mac-osx/",
"date" : "2013-05-22 04:53:31 -0400",
"content" : "I haven’t evaluated this yet, but another developer at my local Ruby meetupgroup said that this has been used in production and fairs pretty well.“RubyMotion is a revolutionary toolchain that lets you quickly develop and testnative iOS and OS X applications for iPhone, iPad and Mac, all using the awesomeRuby language you know and love.”"
} ,
{
"title" : "Uninstalling Command Line Tools for Xcode",
"category" : "",
"tags" : "xcode",
"url" : "/2013/05/uninstalling-command-line-tools-for-xcode/",
"date" : "2013-05-20 23:51:42 -0400",
"content" : "I ran into a problem trying to install Ruby 2.0.0 via RVM over the weekend.When I got to work the next day and needed to do work using Ruby 1.8.7, I raninto issues. This led to updating RVM using ‘rvm get stable’, and then trying toreinstall Ruby 1.8.7.I was then prompted that the version of Xcode I’m using is older, and thus needsto be updated. I tried to look for the command to uninstall it, just to ensurethat the new installation (possibly via the App Store) is clean. I found thiscommand to work for Mac OSX 10.8.3 (Mountain Lion).sudo /Library/Developer/4.1/uninstall-devtools -mode=allIf you have a much older version of XCode, the command might be:sudo /Developer/Library/uninstall-devtools --mode=allAlso, you might have a full version of XCode installed under/Applications/Xcode.app/, in which case you’ll want to simply drag and dropit into your trash to uninstall it. I did this and RVM finally indicated thatthere was no remaining outdated version of XCode. As you can see it defaults bychecking in the /Applications/Xcode.app/ folder for developer tools.Installing requirements for osx, might require sudo password.Error: No developer directory found at /Applications/Xcode.app/Contents/Developer. Run /usr/bin/xcode-select to update the developer directory path.Strange thing is that for me, even after reinstalling the Command Line Tools forXCode 4.6.2 and then restarting my machine, I still am receiving this error:Error: No developer directory found at /Applications/Xcode.app/Contents/Developer. Run /usr/bin/xcode-select to update the developer directory path.Already up-to-date.I’m not really sure what the new path is. I’ve found that it’s best to simplyinstall the full Xcode, so I’ve done that. No further errors regarding thedeveloper directory path."
} ,
{
"title" : "Setting Rspec as the Default",
"category" : "",
"tags" : "generators, workflow",
"url" : "/2013/05/setting-rspec-as-the-default/",
"date" : "2013-05-19 08:19:49 -0400",
"content" : "When setting up a new Rails application you’ll likely want to make Rspec thedefault test framework for new models that are generated with scaffolding. Thisis usually handled by default by the Rspec gem after you install it. It’spossible to explicitly set this however, as well as other configurations for generators.This is explained in more details in the Customizing Your Workflow section ofthe Rails generator documentation. You even have the option of turning off thegeneration of stylesheets if preferred."
} ,
{
"title" : "Remote Pair Programming",
"category" : "",
"tags" : "pair programming, remote pair programming, coding presentation",
"url" : "/2013/04/remote-pair-programming/",
"date" : "2013-04-22 20:48:59 -0400",
"content" : "I really find pair programming to be annoying. It seems like a waste of time ascompared to doing peer code review over a Git branch that has been squashed.But I really can’t knock something totally unless I’ve tried it for a while.Some people shared these resources today which I wanted to archive here. Atleast with remote pair programming you still get to have control of yourcomputer, instead of sitting aside and watching idly. Here are some remotecollaborative coding tools. These could also be useful for people wanting toremotely teach others how to code certain things. SubEthaEdit Sublime Collaboration -Plugin for the Sublime Text Editor Mad Eye - Web based collaborative coding tool"
} ,
{
"title" : "Languages Supported by Github Flavored Markdown",
"category" : "",
"tags" : "yardoc, github, markdown",
"url" : "/2013/04/languages-supported-by-github-flavored-markdown/",
"date" : "2013-04-12 23:57:57 -0400",
"content" : "NOTE: This post updated on 12/08/2020I’m currently configuring the Yard documentation tool for use withRuby/Rails projects. I could see that it’s possible to create a.yardopts file in the main directory for your Rails application, andsimply add command line arguments to the file.I just discovered that you can add a list of files, likely placed under the‘doc’ directory, to your .yardopts file, and those files will be included inyour generated documentation set. This is perfect for changelogs, readmefiles, and other high level documentation. After installing the Redcarpet gem,you can name these files with the ‘.md’ extension to useMarkdown formatting on your documentation.After further investigation I discovered that Yard supportsGithub Flavored Markdown, with support for syntax highlighting of anumber of different languages. This is accomplished by wrapping your code withlines that consist of three backticks, with the first line suffixed by thelanguage name.```rubythis = "Ruby Code"puts "This is #{this}"```Unfortunately the Github docs refer you to this hightlight.js test pagefor the list of supported languages.Github uses Linguist to perform language detection and syntax highlighting.Here a list of common languages that can be used with the backtick (seefull list in Linguist - languages.yml). actionscript3 apache applescript asp brainfuck c cfm clojure cmake coffee-script, coffeescript, coffee cpp - C++ cs csharp css csv bash diff elixir erb - HTML + Embedded Ruby go haml http java javascript json jsx less lolcode make - Makefile markdown matlab nginx objectivec pascal PHP Perl python profile - python profiler output rust salt, saltstate - Salt shell, sh, zsh, bash - Shell scripting scss sql svg swift rb, jruby, ruby - Ruby smalltalk vim, viml - Vim Script volt vhdl vue xml - XML and also used for HTML with inline CSS and Javascript yaml"
} ,
{
"title" : "Coding Games",
"category" : "",
"tags" : "javascript, code games",
"url" : "/2013/04/coding-games/",
"date" : "2013-04-11 23:30:59 -0400",
"content" : "A few weeks ago I heard about this FightCode website that lets youprogram robots that compete against other coders who code their own robots. Ituses Javascript, so it should be accessible to many web developers. They evenhave a Facebook page you can like.Recently a co-worker also mentioned a new programming game called NodeWar.It appears to be under development still, but could prove to be awesome."
} ,
{
"title" : "Customize your IRB",
"category" : "",
"tags" : "irb",
"url" : "/2013/03/customize-your-irb/",
"date" : "2013-03-22 20:36:54 -0400",
"content" : "Stephen Ball of RakeRoutes.com has a cool post on how to customize your IRBenvironment.Customize your IRB"
} ,
{
"title" : "htaccess tester",
"category" : "",
"tags" : "htaccess",
"url" : "/2013/03/htaccess-tester/",
"date" : "2013-03-01 11:43:17 -0500",
"content" : "I’m not sure how one would use this, but it looks like it’s supposed to beuseful.[http://htaccess.madewithlove.be/]"
} ,
{
"title" : "Using Find Each to Process Batches",
"category" : "",
"tags" : "batch processing",
"url" : "/2013/02/using-find-each-to-process-batches/",
"date" : "2013-02-28 23:49:14 -0500",
"content" : "I just found out that there is a find_each method provided byActiveRecord which loops through an array of models that are retrieved inbatches of 1000 at a time.The find is performed by find_in_batches with a batch size of 1000 (or asspecified by the :batch_size option).User.find_each(:start => 2000, :batch_size => 5000) do |user| user.do_somethingend"
} ,
{
"title" : "Development Time",
"category" : "",
"tags" : "time, estimate",
"url" : "/2013/02/development-time/",
"date" : "2013-02-28 05:02:05 -0500",
"content" : "It seems common that development tasks or projects take much more time thanexpected. My manager recently pointed these out to me:The 90/90 Rule - “The first 90 percent of the code accounts for the first90 percent of the development time. The remaining 10 percent of the codeaccounts for the other 90 percent of the development time.”— Tom Cargill, Bell LabsHofstadter’s Law - “Hofstadter’s Law: It always takes longer than youexpect, even when you take into account Hofstadter’s Law.”— Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid"
} ,
{
"title" : "Minecraft Mods",
"category" : "",
"tags" : "minecraft, mods, craftbukkit, console",
"url" : "/2013/02/minecraft-mods/",
"date" : "2013-02-10 05:30:53 -0500",
"content" : "I’ve been playing Minecraft for a little while now.Want to see what else this thing can do, so I want to get access to consolecommands. This is a challenge it seems, so I’m going to document how it’s donehere.I’m currently using Minecraft version 1.4.7. I just downloaded the latestbuild/snapshot (possibly unstable yet likely compatible) fromMinecraft-Console by simo415 on Github. A prerequisite to this modrunning is modloader, which states that it’s only for version 1.4.7.I’m using a Mac. I downloaded the modloader source files, then followedthe instructions to unpack the contents of the minecraft JAR file into atemporary directory.cd ~mkdir mctmpcd mctmpjar xf ~/Library/Application\ Support/minecraft/bin/minecraft.jarNext I copied the contents of the modloader source files into the temporarydirectory, overwriting all files. I then ran the following to remove somefiles and repackage the minecraft JAR file.rm META-INF/MOJANG_C.*jar uf ~/Library/Application\ Support/minecraft/bin/minecraft.jar ./cd ..rm -rf mctmpIt wasn’t clear how I install the Minecraft Console mod after performing this,but another website helped me see that I simply needed to drop theMinecraft_Console_Snapshot.zip file into~/Library/Application Support/minecraft/mods, which I renamed as‘Minecraft_Console.zip’ and restarted the program. I could see that in thatdirectory a ‘console’ folder showed up with configuration files and logs, so Iknew the plugin was loading.The ‘GuiAPI’ plugin is optional, and it required some sort of ‘Forge’ softwareto be present, so I decided not to explore this. I obtained theSinglePlayerCommands-MC1.4.7_V4.5.jar and ran the file, which presented aninstaller that auto-detected where to install the files. I did this andrestarted the program, but the command’s don’t seem to work. I really wantedthis because it seems to provide commands that are very helpful from theperspective of a single player… like reporting where my position is using‘//pos’ (which didn’t work).With Minecraft Console installed, I can use the backslash key to bring up theconsole, however I find that I can simply use the t command to talk likenormal…and this still respects commands starting with forward slash such as/time set 0. I also found that the forward slash key also brought up a nicelooking console that was large yet less intrusive to the game.The time set command didn’t work however, but instead responded with “Youdon’t have permission to set the time”. I connected to my server usingRemoteBukkit and used ‘/op myusername’ to give myself Op privileges. Afterdoing this I was able to set the time, and other commands."
} ,
{
"title" : "Obtain MySQL Query Statistics using Explain",
"category" : "",
"tags" : "mysql, explain",
"url" : "/2013/02/obtain-mysql-query-statistics-using-explain/",
"date" : "2013-02-07 03:56:31 -0500",
"content" : "Sometimes it really counts to restructure the queries made to your MySQLdatabase, especially so that they do make use of indexes which are present onthe table.You can obtain information on which keys are being used with a query by usingthe EXPLAIN statement before your SELECT statement. Here are someexamples of it’s output.mysql> EXPLAIN SELECT * FROM USERS WHERE area_id = 2;+----+-------------+---------+------+---------------+------+---------+------+------+-------------+| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |+----+-------------+---------+------+---------------+------+---------+------+------+-------------+| 1 | SIMPLE | USERS | ALL | NULL | NULL | NULL | NULL | 6 | Using where |+----+-------------+---------+------+---------------+------+---------+------+------+-------------+1 row in set (0.03 sec)mysql> EXPLAIN SELECT * FROM USERS WHERE group_id = 16515319;+----+-------------+---------+------+---------------------------+--------------------------+---------+-------+------+-------+| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |+----+-------------+---------+------+---------------------------+--------------------------+---------+-------+------+-------+| 1 | SIMPLE | USERS | ref | index_users_on_group_id | index_users_on_group_id | 4 | const | 1 | |+----+-------------+---------+------+---------------------------+--------------------------+---------+-------+------+-------+1 row in set (0.00 sec)"
} ,
{
"title" : "Git Branching Model",
"category" : "",
"tags" : "",
"url" : "/2013/02/git-branching-model/",
"date" : "2013-02-06 22:57:45 -0500",
"content" : "I just want to put this here for future reference.There is a version control branching model known as Git-Flow, which is verysimilar to the model used on the team I work with. SeeA Successful Git Branching Model. This seems to work well for teams thatmake several separate commits to the ‘develop’ branch, with differentversioned releases provided to the ‘release branch’ that may or may not havebeen tested and put through a quality assurance process, and finally onlymajor updates (not releases) merged into the ‘master’ branch and tagged withthe appropriate version number.My team doesn’t make small updates here and there on the develop branch. Wehave our own feature branches, which have all the small micro commits made tothem locally with whatever notes we choose to make for each. Once we’re donewith our work, we squash the commit into a single monolithic commit using ‘gitrebase’ then merge this into the ‘develop’ branch. This squash makes theprocess of reviewing the changes with the lead developer easier to do usingGitX.To squash we do a rebase with the remote ‘develop’ branch.git rebase -i origin/developAfter running this rebase command, our configured text editor opens. All ofthe commits that are not part of the ‘develop’ branch are displayed with‘pick’ shown before them, in order of oldest to newest commit.pick 2df148b combined commentspick 32e2471 Added description to rake taskspick 17ffc55 updated comment for config taskpick f0c4c6a added descriptive comments# Rebase 3012af2..f0c4c6a onto 3012af2## Commands:# p, pick = use commit# r, reword = use commit, but edit the commit message# e, edit = use commit, but stop for amending# s, squash = use commit, but meld into previous commit# f, fixup = like "squash", but discard this commit's log message# x, exec = run command (the rest of the line) using shell## These lines can be re-ordered; they are executed from top to bottom.## If you remove a line here THAT COMMIT WILL BE LOST.## However, if you remove everything, the rebase will be aborted.## Note that empty commits are commented outTo squash everything together we simply replace ‘pick’ with ‘s’ (or ‘squash’)for all but the first of the commits. After saving and closing the file, therebase process continues, various conflicts must be resolved and committedbefore using ‘git rebase –continue’. Once all the commits are squashedtogether with no further conflicts, the text editor pops open again and we areprompted to create a single commit comment.After this is complete, we merge our feature branch with it’s single commitinto the ‘develop’ branch and push remotely. Next we merge this into the‘release’ branch, then into the ‘master’ branch. This is kind of silly, reallyfor our team with this process we only need a ‘master’ branch. Our featurebranches go through the Q&A and code review process with the leaddeveloper, so really ‘master’ is all that’s necessary.So in effect, what we do is closer to the Github-Flow. Our lead developerprovided me with this link and we’re switching to this model soon, cutting outthe ‘release’ and ‘develop’ branches. Here is a slideshow by Zach Holman ofGithub elaborating further - How GitHub uses GitHub to Build GitHub."
} ,
{
"title" : "Referencing Gem Source Code",
"category" : "",
"tags" : "gem, gem source, unpack",
"url" : "/2013/02/referencing-gem-source-code/",
"date" : "2013-02-06 08:11:46 -0500",
"content" : "It’s often difficult to work with Ruby Gems that your Rails applicationdepends on because the source code for the gem itself is packed away in a gemdirectory. I’ve often found myself using the command ‘rvm gemdir’ to outputthe path to the gem directory that my application is using, changing to thatdirectory, and opening the source using Textmate. This is a time consumingprocess.Instead, it’s useful to simply unpack a gem into your Rails application sothat it loads from the vendor/gems directory. I’m currently using thefollowing command to unpack the RefineryCMS gems into my Rails app forreference.gem unpack refinerycms --version 2.0.9 --target vendor/gemsgem unpack refinerycms-core --version 2.0.9 --target vendor/gemsgem unpack refinerycms-dashboard --version 2.0.9 --target vendor/gemsThese unpacked gems are not referenced in my Gemfile, so I expect thereshouldn’t be any conflicts. The source is just there for my referencingpleasure."
} ,
{
"title" : "You can be a programmer too!",
"category" : "",
"tags" : "",
"url" : "/2012/12/you-can-be-a-programmer-too/",
"date" : "2012-12-28 05:52:52 -0500",
"content" : "I’ve been telling people to check out CodeSchool.com because it has well laidout interactive courses that you can take to learn advanced web developmenttechnologies (jQuery, Coffeescript, Rails, etc).However, it doesn’t have the prerequisite courses on HTML, CSS, Javascript, andRuby.It turns out that CodeAcademy.com covers those languages.Of course not just anyone can be a web developer, but if you’re nerdy and thisstuff excites you, then go for it. I have a job working in San Francisco as aprofessional developer, doing Ruby on Rails programming. No college. Justexperience. Just takes will and determination, and the effort to get experienceeven if it means taking a pay cut for a while.If you’re looking for something a little more low level, like algorithms or anintro to computer science, check out Udacity."
} ,
{
"title" : "Spree Extension Development Environment using RVM",
"category" : "",
"tags" : "Spree, extension",
"url" : "/2012/12/spree-extension-development-environment-using-rvm/",
"date" : "2012-12-25 13:30:38 -0500",
"content" : "I’ve found that there is trouble working with a Spree extension when your gemset does not include the gems included with the Spree gem itself. I discoveredthis after generating a Spree extension, confining the extension to it’s own gemset using RVM, and then running ‘bundle install’ based on the Gemfile/gemspecconfiguration of just the extension itself.To overcome this, I recommend making a folder named ‘spree’, then configuringthat folder to use a shared ‘Spree’ gem set.$ mkdir spree$ cd spree$ rvm --rvmrc --create 1.9.3-p327@spree$ cd ..$ cd spree===================================================================================== NOTICE ====================================================================================== RVM has encountered a new or modified .rvmrc file in the current directory == This is a shell script and therefore may contain any shell commands. == == Examine the contents of this file carefully to be sure the contents are == safe before trusting it! ( Choose v[iew] below to view the contents ) =====================================================================================Do you wish to trust this .rvmrc file? (/Users/jason/Sites/spree/.rvmrc)y[es], n[o], v[iew], c[ancel]> yesNext, install the Rails and Spree gems, generate a Rails application, installSpree into that application. You will use this application to explore Spree,experiment with it, etc. It’s main purpose is to install the gems that aredependencies of the Spree gem.gem install rails -v 3.2.9gem install spree -v 1.3.0rails _3.2.9_ new my_storespree install my_storeIn the same directory, generate the Spree extension.$ spree extension myextension create spree_myextension create spree_myextension/app create spree_myextension/app/assets/javascripts/admin/spree_myextension.js create spree_myextension/app/assets/javascripts/store/spree_myextension.js create spree_myextension/app/assets/stylesheets/admin/spree_myextension.css create spree_myextension/app/assets/stylesheets/store/spree_myextension.css create spree_myextension/lib create spree_myextension/lib/spree_myextension.rb create spree_myextension/lib/spree_myextension/engine.rb create spree_myextension/lib/generators/spree_myextension/install/install_generator.rb create spree_myextension/script create spree_myextension/script/rails create spree_myextension/spree_myextension.gemspec create spree_myextension/Gemfile create spree_myextension/.gitignore create spree_myextension/LICENSE create spree_myextension/Rakefile create spree_myextension/README.md create spree_myextension/config/routes.rb create spree_myextension/config/locales/en.yml create spree_myextension/.rspec create spree_myextension/spec/spec_helper.rb create spree_myextension/Versionfile ******************************************************************************** Your extension has been generated with a gemspec dependency on Spree 1.3.0. Please update the Versionfile to designate compatibility with different versions of Spree. See http://spreecommerce.com/documentation/extensions.html#versionfile Consider listing your extension in the official extension registry http://spreecommerce.com/extensions" ********************************************************************************Now that the extension is created, you’ll need to make a minor modification tothe configuration of the extensions Gemfile by forcing it to use the ‘edge’branch version of the ‘spree_auth_devise’ gem. 
This requires simply adding the“, :branch => ‘edge’” to the end of the existing definition in the Gemfile forthe extension. This step ensures that you do not receive any errors during thenext step.gem 'spree_auth_devise', :git => "git://github.com/spree/spree_auth_devise", :branch => 'edge'Now you will need to generate the dummy Rails application which is used by yourRspec tests. As you are generating an extension that is meant to integrate withan existing Rails application environment, with the Spree gem installed, this isneeded to ensure that you can rely on the Rails and Spree libraries beingpresent during the tests.$ bundle exec rake test_appThor has already been required. This may cause Bundler to malfunction in unexpected ways.Generating dummy Rails application...Setting up dummy database...Now, you can troubleshoot issues you run into while developing the extension bydropping into the dummy applications Rails console. The dummy application islocated under ‘spree_myextension/spec/dummy’. You should keep this applicationconfigured so that it’s configured as a stock Rails application with Spreeinstalled, with your Rspec tests generating test data in the database on the fly.At most you should modify this Rails application so that it has the necessarymodifications which you have documented for the user to make upon installingyour extension, as well as the initializers or other files that your gem hasgenerators for.Lastly you can initialize the new extension to sync up with a Git repository. Irecommend using Bitbucket if you’re not developing apublic extension, as they host unlimited private repositories for teams of upto 5 users."
} ,
{
"title" : "Creating a Gem",
"category" : "",
"tags" : "gem",
"url" : "/2012/12/creating-a-gem/",
"date" : "2012-12-25 12:50:03 -0500",
"content" : "In the past gems were created manually, or generated using the echoe gem (lastrelease Sept 21, 2011), or the Jeweler gem (last release November 7, 2011).Since then it appears that the most automated way to create a gem is by usingBundler, via the bundle gem command.$ bundle gem my_tools create my_tools/Gemfile create my_tools/Rakefile create my_tools/LICENSE create my_tools/README.md create my_tools/.gitignore create my_tools/my_tools.gemspec create my_tools/lib/my_tools.rb create my_tools/lib/my_tools/version.rbInitializating git repo in /Users/jsmith/Sites/my_tools"
} ,
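As a point of reference, this is roughly what the generated version file looks like; the exact contents below are recalled from memory rather than taken from the post, so treat them as an approximation.
# my_tools/lib/my_tools/version.rb -- the generated gemspec reads this constant.
module MyTools
  VERSION = "0.0.1"
end
The generated Rakefile also wires in Bundler's gem helper tasks, which is what later provides rake build, rake install, and rake release for the new gem.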
{
"title" : "Ruby File Modes",
"category" : "",
"tags" : "file",
"url" : "/2012/12/ruby-file-modes/",
"date" : "2012-12-21 04:14:20 -0500",
"content" : "When working with files, you can open them in one of several modes.File.new("/file/path.txt", "w")You can find the description of these modes in theIO documentation."
} ,
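To make the modes concrete, here is a small runnable sketch of the most common ones (the file path is arbitrary and not from the original post):
# "r" = read-only (the default), "w" = write/truncate, "a" = append,
# "r+" = read-write starting at the beginning of the file.
File.open("/tmp/modes_demo.txt", "w") { |f| f.puts "created or truncated" }
File.open("/tmp/modes_demo.txt", "a") { |f| f.puts "appended to the end" }
File.open("/tmp/modes_demo.txt", "r") { |f| puts f.read }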
{
"title" : "Return FALSE or Raise Error?",
"category" : "",
"tags" : "ruby, exception handling",
"url" : "/2012/12/return-false-or-raise-error/",
"date" : "2012-12-20 21:35:44 -0500",
"content" : "I was working on a gem a couple months ago, and it came time for my boss to doa code review before we install the gem on another teams system. My boss pointedout that there were areas where I was returning FALSE, and baking in a lot ofconditional statements and other handling, instead of using Ruby’s built infeature of error handling, which is designed to bubble exceptions up the callstack. He informed me that in situation that are not expected to occur, it’s bestto raise an exception to halt execution and report the issue. He evenrecommended that I read Exceptional Ruby, a book devoted to the subject ofproper exception handling.I didn’t understand that Ruby exceptions bubble up to the previously callingscripts, and thus can be captured and handled further up the stack. My bosspointed out that since my gem would be used only by a special controller setupon this separate system, in the service of providing an API, I could incorporateexception handling at the controller level which would handle different types oferrors.We ended up defining several custom exception classes, used for different typesof error which might occur. For instance, errors resulting from unexpected APIcalls would raise the custom MyAPI::UsageError (which inherits from standardRuntimeError), while expectations placed on their system while interfacing withits classes would raise one of the standard errors.Part of the Exceptional Ruby guide informed me that you don’t always have to usea begin..rescue..end block. You can simply insert a rescue block at the end amethod, thus rescuing all statements before it. As you can see, we simplyrescued the types of exceptions caused by the system calling the API so that itwould simply return the error in the response, instead of raising the error inthe remote systems Airbrake logs.def handler raise MyApi::UsageError, "Request must be HTTP POST" unless request.post? @response = Kabam::GmoApi::Server.handler(params[:json_request]) render :text => @response.to_jsonrescue MyApi::InvalidInputError, MyApi::UsageError => e error_response = MyApi::Response.new(:status => 'failure', :result => e.message) render :text => error_response.to_jsonendFor reference, here is the hierarchy of standard Ruby exceptions which you canuse, or inherit from for your own custom exception classes.Exception NoMemoryError ScriptError LoadError NotImplementedError SyntaxError SignalException Interrupt StandardError ArgumentError IOError EOFError IndexError LocalJumpError NameError NoMethodError RangeError FloatDomainError RegexpError RuntimeError SecurityError SystemCallError SystemStackError ThreadError TypeError ZeroDivisionError SystemExit fatal"
} ,
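A minimal sketch of the kind of custom exception hierarchy described above; the MyApi class names come from the post's example, but the method and messages are illustrative rather than the actual gem's code.
require 'json'

module MyApi
  # Raised when the caller uses the API incorrectly (wrong verb, missing input, etc.).
  class UsageError < RuntimeError; end
  # Raised when the input is present but malformed.
  class InvalidInputError < RuntimeError; end
end

def parse_request(raw_json)
  raise MyApi::UsageError, "a JSON request body is required" if raw_json.nil? || raw_json.empty?
  JSON.parse(raw_json)
rescue JSON::ParserError => e
  # Re-raise as one of our own classes so a controller-level rescue can turn it into a response.
  raise MyApi::InvalidInputError, e.message
end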
{
"title" : "When Testing Seems Pointless",
"category" : "",
"tags" : "testing, tdd, unit-testing",
"url" : "/2012/12/when-testing-seems-pointless/",
"date" : "2012-12-19 05:51:53 -0500",
"content" : "I remember when I was first exposed to the concept of test driven development(TDD), it seemed like you were writing a test that did the same thing as thefunction itself. This really left me perplexed as to why everyone was ravingabout it’s value.Take for instance the following method:class SomeClass def self.todays_date Time.now.strftime("%Y-%m-%d") endendAll this method does is return the date in ‘YYYY-MM-DD’ format. This might beused to name a log file, or in an ActiveRecord finder method call.The unit test for this method, using RSpec, would look like this:describe ".todays_date" do it 'returns todays date in YYYY-MM-DD format' do result = Contest.todays_date result.should == Time.now.strftime("%Y-%m-%d") endendDoesn’t that just seem silly? Yes!But one thing to remember about tests is that they ensure that parts of yourapplication do what you expect them to do. It’s only with simple methods likethis that you have tests that are so simple that they seem useless. However,even in this case, the test shown above is not a total waste. With this test inplace, I can trust that any other part of my application which needs to use thismethod can do so and expect the same result. This test is my enforcer, ensuringthat the contract between that method and the rest of my code is maintained, acontract which says SomeClass.todays_date will always return todays date in‘YYYY-MM-DD’ format.Ensuring that the modules, classes, and objects you’ve designed provide theexpected interface, perform the expected actions, and return the expectedresults, makes it so that you can move on and code other parts of your systemwithout worrying if you should handle some situation where another entity mightfail. You can focus on one unit at a time, and switch to the context of thedependencies and integration between the units when necessary, without havingto think of them all at once within the limited memory of your human mind.For more complex methods, the test helps you catch errors as you create themethod, and in the future when you modify the method. The tests even act as aform of documentation, as they provide an example of how the rest of your codemight interface with your method.There is surely a learning curve to unit testing like this, and integrationtesting which ensures your units work together as expected. Once you developthe skills necessary to employ testing in your application, you’ll realize thatthe peace of mind obtained once your system becomes a large system is veryvaluable. It allows you to be more agile, not needing to be extra careful. Ableto bring on new developers that aren’t completely familiar with your code likeyou are. Refactoring major parts of the system are also easier, as the testspoint out every area of the application which isn’t behaving in a way which therest of the application expects."
} ,
{
"title" : "Using Rspec to Test Controllers",
"category" : "",
"tags" : "rspec, controller",
"url" : "/2012/12/using-rspec-to-test-controller/",
"date" : "2012-12-13 04:46:41 -0500",
"content" : "Here are some tips that will help you with Controller tests in Rspec.Common Response Methods# Get HTTP response code with message. Example: "302 Found"response.status# Get HTTP response code. Example: 200response.response_code# Get response bodyresponse.body# Get location header, used with redirectsresponse.locationCommon Matchers# Check for successful response, same as response.success?.should be_trueresponse.should be_success# Check if a specific template was renderedresponse.should render_template("edit")# Test if the controller method redirects to a specific path/urlresponse.should redirect_to(posts_url)# Check a global variable assigned in the controller methodassigns(:owner_id).should eq(current_user.id)Mocking or Stubbing PartialsIn this example I’m using Mocha with Rspec v1.3.2. I refer to the controlleritself in this Controller test as ‘controller’. The ‘render’ or‘render_to_string’ methods are part of the controller itself. In this exampleit’s rendering a partial to a string, and including it in a JSON hash beingreturned to an AJAX call.# Controller method uses render_to_string to render a partial to HTML string, includes in JSON responsecontroller.expects(:render_to_string).with(:partial => 'comment_block', :locals => {:post => post}).returns("comment block content").at_least_onceIt’s not advisable that you use a helper directly inside of a controller, andthus you shouldn’t need to stub one from within a controller method spec.Helpers should be used within views, otherwise your “helper” method should existin a model, or in a utility library in /lib, so it’s available in the controlleror elsewhere.Although this post isn’t on View testing, this article helps explain how tomocking partials and helpers in views."
} ,
{
"title" : "Good Guy Greg",
"category" : "",
"tags" : "testing, rspec",
"url" : "/2012/12/good-guy-greg/",
"date" : "2012-12-12 00:48:15 -0500",
"content" : ""
} ,
{
"title" : "Using Rails 2.3.8",
"category" : "",
"tags" : "rails, Rails-2.3.8, bundler",
"url" : "/2012/12/using-rails-2-3-8/",
"date" : "2012-12-03 22:57:38 -0500",
"content" : "I’m working on a project that is stuck on Rails 2.3.8 due to the size andcomplexity of the codebase. Upgrading it would be a nightmare. I recently raninto an issue with the database_cleaner gem, which isn’t rolling backtransactional queries properly. I’m not sure if the issue is with the gem, orperhaps some configuration with the system (ActiveRecord) which is causing theissue. Because of this, I’m wanting to create a dummy Rails 2.3.8 applicationso that I can reproduce the issue on a fresh, simple, vanilla Rails application.I created a new ‘rails238’ directory and switched to it, then created a newgemset via RVM.rvm --rvmrc --create 1.8.7@rails238I then installed Rails 2.3.8.gem install --version '2.3.8' railsAfter this finished, I ran into an error when I would try to create a new Railsapp.$ rails -d mysql funtownauto/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:55: uninitialized constant ActiveSupport::Dependencies::Mutex (NameError) from /Users/jsmith/.rvm/rubies/ruby-1.8.7-p371/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /Users/jsmith/.rvm/rubies/ruby-1.8.7-p371/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/activesupport-2.3.8/lib/active_support.rb:57 from /Users/jsmith/.rvm/rubies/ruby-1.8.7-p371/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /Users/jsmith/.rvm/rubies/ruby-1.8.7-p371/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/rails-2.3.8/lib/rails_generator.rb:31 from /Users/jsmith/.rvm/rubies/ruby-1.8.7-p371/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /Users/jsmith/.rvm/rubies/ruby-1.8.7-p371/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/rails-2.3.8/bin/rails:15 from /Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/bin/rails:19:in `load' from /Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/bin/rails:19 from /Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/bin/ruby_noexec_wrapper:14This was resolved by downgrading the default Rubygems to version 1.4.2.gem update --system 1.4.2I ran through the instructions to setup Bundler with a Rails 2.3.8 system,manually created the ‘Gemfile’ in the root directory of the project, then ran‘bundle install’.This included RSpec for Rails 1.3.4.source :rubygemsgem 'rails', '2.3.8'gem 'mysql2', '~> 0.2.11'gem 'rdoc'gem 'bundler'group :development, :test do gem 'rspec', '~> 1.3.2' gem 'rspec-rails', '~> 1.3.4' gem 'database_cleaner', '0.8.0' gem 'fabrication', '~> 1.3.2'endI installed Rspec, then tried to list the rake tasks available, I would receivean error regarding RDoc.$ script/generate rspecConfiguring rspec and rspec-rails gems in config/environments/test.rb ... 
exists lib/tasks create lib/tasks/rspec.rake create script/autospec create script/spec create spec create spec/rcov.opts create spec/spec.opts create spec/spec_helper.rb$ be rake -T | grep specrake aborted!no such file to load -- rake/rdoctask/Users/jsmith/Documents/rails238/funtownauto/Rakefile:8:in `require'/Users/jsmith/Documents/rails238/funtownauto/Rakefile:8/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/bin/ruby_noexec_wrapper:14(See full trace by running task with --trace)It turns out that the config in the Rakefile conflicts with the latest Rdoc(version 3.12). I had to replace “require ‘rake/rdoctask’” with:require 'rdoc/task'This resolved the rake issue.require 'rdoc/task'Now I try to run the default Rspec test script, but run into an issue with theMySQL gem.$ bundle exec rake spec!!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.rake aborted!no such file to load -- mysql/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@rails238/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'Still stuck on this issue with the MySQL gem. I have MySQL2 gem installed,which works on the other non-new project, but still this error persists. ThisStackoverflow Article seems related, but the suggested solutions don’t help."
} ,
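The post ends with the mysql error unresolved. One thing worth checking, offered here only as an assumption and not something the post confirms: with the mysql2 gem (0.2.x), the database adapter must be set to 'mysql2' rather than 'mysql', otherwise Rails 2.3 looks for the removed bundled mysql driver. Roughly, from a console or initializer (names and credentials are placeholders):
ActiveRecord::Base.establish_connection(
  :adapter  => "mysql2",   # not "mysql" -- the bundled mysql.rb driver was removed in Rails 2.2
  :database => "funtownauto_test",
  :username => "root",
  :host     => "localhost"
)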
{
"title" : "Rspec Executable Not Found",
"category" : "",
"tags" : "rvm, rspec",
"url" : "/2012/11/rspec-executable-not-found/",
"date" : "2012-11-28 04:14:27 -0500",
"content" : "I’m working on an older Rails 2.3.8 application that is way too complicated andwithout tests to make it worth upgrading to Rails 3 or higher. Because of thiswe must use RSpec-Rails 1.3.4, with RSpec 1.3.2.I was just trying to run a single test from the command line like so:$ bundle exec rspec spec/models/post.rbbundler: command not found: rspecInstall missing gem executables with `bundle install`I tried to uninstall and reinstall the gems, but still it didn’t work. I amusing RVM, so this might be part of why this command isn’t working.It turns out that older versions of RSpec used ‘spec’ instead of ‘rspec’ as theexecutable name.$ which spec/Users/jsmith/.rvm/gems/ruby-1.8.7-p371@myproject/bin/spec"
} ,
{
"title" : "Changing the Default Text Editor",
"category" : "",
"tags" : "git, text-editor",
"url" : "/2012/11/changing-the-default-text-editor/",
"date" : "2012-11-20 00:30:07 -0500",
"content" : "Certain command line utilities drop into an external text editor program toaccept certain types of input. For instance, when using the command‘crontab -e’ to edit your cron table, your default text editor program will beopened up with the current cron table configuration. The same also applies tothe Git versioning system when using the interactive rebase mode. This helpsthe program avoid supporting it’s own text editor, and allows the user tospecify their preferred text editor.To specify the default text editor, simply edit or place the followingdefinition inside of the .bash_profile file in your home directory. This exampleuses ‘/usr/local/bin/mate -w’ to specify that the Textmate editor be used.You may configure this value to reflect the path for Vim, Nano, or any othertext editor you wish to use.export EDITOR="/usr/local/bin/mate -w"It’s also possible to explicitly configure Git to use a specific text editor,thus overriding the default ‘EDITOR’ value specified in the command lineenvironment. This is useful if you only want to change the behaviour of Git, andnot affect the rest of your environment.git config --global core.editor "mate -w"UPDATE - 03/28/2013:I recently switched to Sublime 2 text editor. After installing the applicationI created a symlink like so:ln -s "/Applications/Sublime Text 2.app/Contents/SharedSupport/bin/subl" /usr/local/bin/sublAfter this was completed I added the following to my shell config file(.bash_rc / .zshrc):# Text Editorexport EDITOR=/usr/local/bin/sublIf you plan on using Sublime with utilities that expect you to save and closethe file before the utility continues, you’ll need to configure a subl_waitscript as outlined here.To use Sublime Text with Git during processes like an interactive rebase,configure it as the text editor using this command:git config --global core.editor "subl -n -w""
} ,
{
"title" : "Metaclass",
"category" : "",
"tags" : "metaprogramming, metaclass",
"url" : "/2012/09/metaclass/",
"date" : "2012-09-19 22:19:03 -0400",
"content" : "I ran into an instance of meta programming in Ruby today, in theExceptional Ruby book I’m reading for work. It seems that the theme this weekis “you don’t know Ruby as well as you could”.I might be wrong in my understanding here, but this is what I understand thusfar:Ruby stores methods for an object in it’s class, not the object itself. Objectsonly really store their attributes/variables in memory. However there exists someunseen entity known as the metaclass which belongs to each object, and it canpossibly store methods which belong to that object, but not necessarily to thatobjects class.class Person def speak puts "Hello There!" endendjohn = Person.newbob = Person.newclass << john def bark puts "Ruff! Ruff!" endend>> john.speak=> Hello There!>> bob.speak=> Hello There!>> john.bark=> Ruff! Ruff!>> bob.barkNoMethodError: undefined method `bark' for #The reference to ‘class « john’ opens a code block where methods are definedin the metaclass for ‘john’, and not the ‘Person’ class.A more thorough understanding of this is explored in this blog post -Metaprogramming in Ruby: It’s All About the Self"
} ,
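Two equivalent ways to reach the same metaclass without the class << john block, added here as an aside (assuming the john and bob objects from the example above; method names are made up):
# Define a method directly on john's singleton ("meta") class:
def john.growl
  puts "Grrrr!"
end

# Or, on Ruby 1.9+, grab the singleton class explicitly:
john.singleton_class.send(:define_method, :whimper) { puts "..." }

john.growl               # => Grrrr!
john.whimper             # => ...
bob.respond_to?(:growl)  # => false, because bob's class (Person) never changed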
{
"title" : "Using Super with Ruby class methods",
"category" : "",
"tags" : "superclass",
"url" : "/2012/09/using-super-with-ruby-class-methods/",
"date" : "2012-09-19 00:06:32 -0400",
"content" : "One of the awesome things about Ruby is that you can over-ride methods youdefine, or even over-write methods that are built into Ruby.This may not be unique with Ruby, but you can also over-ride super class methodsin your defined subclass and use ‘super’ to execute the logic defined in thesuper class version of that method.class ScumbagSteve def hello puts "Hey, can I borrow $5." endendclass GoodGuyGreg < ScumbagSteve def hello super puts "...I'll pay you back tomorrow with interest." endend>> guy = GoodGuyGreg.new=> #<GoodGuyGreg:0x100346e70>>> guy.helloHey, can I borrow $5....I'll pay you back tomorrow with interest.=> nil"
} ,
{
"title" : "Ruby Coloured Glasses",
"category" : "",
"tags" : "",
"url" : "/2012/09/ruby-coloured-glasses/",
"date" : "2012-09-18 22:46:48 -0400",
"content" : "I’m sorry Taryn. My friend just pointed your site out to me. I swear it was acoincidence. I like your background by the way."
} ,
{
"title" : "Duplicate associated records when using FactoryGirl",
"category" : "",
"tags" : "factory_girl",
"url" : "/2012/09/issues-with-duplicate-associated-records-when-using-factorygirl/",
"date" : "2012-09-10 02:26:46 -0400",
"content" : "When I decided to start using tests as part of my development practice, I hadthe choice of using the default fixture system, or using one of the recommendedfixture alternatives, also known as factories. I decided upon usingFactoryGirl given it’s popular mention on the web, and one of the projects atwork was using it.I had read much about the horrors if using fixtures, with the most memorableopinion of them being that they were ‘brittle’. This, and many other complaints,caused me to skip using fixtures and jump directly to using FactoryGirl.FactoryGirl has been pretty great. The one issue I’ve found however is that themethods for creating associations between parent and child records, andscripting the generation of those records, has proven to be poorly documentedand thus difficult to work with.After a bit of investigation I gained the understanding that the ‘association’attribute used with a factory will cause the parent factory to be generated. Ifyou need the presence of child records in your test, then you need to use the‘after_create’ callback with a code block that generates the children records.So the ‘association’ option goes up (parent), not down (child).This appears to be intended for creating multiple child records in a has_manyrelationship. Often, I really don’t like having ‘Article 1’, ‘Article 2’,‘Article 3’, etc. be the name for my content generated. I want to be able todefine a handful (or more) example records that are either used alone, orassociated with generated records.Recently on another project I had a unique relationship in place. This projectcalls for ‘words’ which have child ‘definitions’, and the definitions haveparent ‘publications’, but the ‘words’ do not have any association with‘publications’. I really didn’t know how to handle this. I’d rather keep itsimple, not so complicated. I just want FactoryGirl to create a record, andassociated records, without creating duplicates (or errors where the duplicaterecord cannot be created due to unique value constraints).So today I found the holy grail of overcoming this issue. I wish it was part ofthe official FactoryGirl documentation, because seriously this seems commonenough of an issue for me that I’d expect that others have the issue. Anyway, Ifound the answer in this StackOverflow question.You can code your factories so that they find the other factory that alreadyexists, and makes the association, or generates a new one. Not only is thispossible through this rather simple solution, but you can also implement moreadvanced creation code.So in my case I’m building a glossary system that has a ‘definition’ modelwhich has two parents, ‘phrase’ and ‘publication’. The following code implementsthis solution for creating a single ‘phrase’ parent of the ‘definition’.As you can see my specialized version of the ‘definition’ factory, named‘definition_cheese_and_spinach_omelette’ uses the get_phrase_named() method togenerate the ‘Omelette’ phrase it should belong to. It will always associatewith the phrase that has the name ‘Omelette’, whether it exists already or not.Another part of the beauty here is that I was also able to add coding to thismethod which generates the letter and ‘cached-slug’ when the Phrase with namesubmitted doesn’t already exist."
} ,
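The code block referenced at the end of this post isn't included in this search index entry, so here is a hypothetical reconstruction of the find-or-create approach it describes; the factory name, get_phrase_named helper, and the Phrase/Definition/Publication models come from the post, but the attributes and factory body are assumptions.
# spec/factories/definitions.rb (FactoryGirl syntax of that era)
def get_phrase_named(name)
  # Reuse an existing Phrase with this name if present, otherwise create it,
  # filling in the letter and cached slug the way the post describes.
  Phrase.find_by_name(name) ||
    FactoryGirl.create(:phrase, :name => name, :letter => name[0, 1].upcase, :cached_slug => name.downcase)
end

FactoryGirl.define do
  factory :definition_cheese_and_spinach_omelette, :class => Definition do
    body "A fluffy omelette stuffed with cheese and spinach."
    phrase { get_phrase_named("Omelette") }
    association :publication
  end
end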
{
"title" : "Finding Records without Specific Child in Many-to-Many Relationship",
"category" : "",
"tags" : "mysql, many-to-many",
"url" : "/2012/07/finding-records-without-specific-child-in-many-to-many-relationship/",
"date" : "2012-07-17 00:57:49 -0400",
"content" : "Okay. Here is a tricky challenge. Let’s say you are coding a blog system wherePosts may have many Tags, and a tag can have many posts. Your database wouldhave a ‘posts’ table, a ‘tags’ table, and a ‘post_tags’. With Ruby on Rails thiswould be configured for an ActiveRecord model using the has_many through method.has_many :tags, :through => :post_tagsI only want posts which have an absence of a relationship with a specificrecord, which is the tag record representing ‘horrible’. How do I query for alist of posts which are absolutely without a specific tag? Like say I have a‘horrible’ tag, and I want all posts which are not tagged with ‘horrible’. Howwould I accomplish this?I’m not limiting myself to coding using an ActiveRecord relational query chain.I’m trying to accomplish this via MySQL using three test tables. I’ve identifiedthat there is hope in accomplishing this using a full outer join, however sucha query is not supported by MySQL. I’ve read other articles that have suggestedusing a union between a right and left outer join, but this didn’t provide mewith the records containing the null values I expected.I suspect that the solution would first involve devising a query that iteratesover each tag for each post, and where a record does not exist in the post_tagstable it there is a NULL value for post_tags.id. After this is accomplishedcould a ‘WHERE’ statement be added which filters results to those which have thetag.id for ‘horrible’, which have a NULL value for post_tags.id.Further searching and I found a solution that isn’t related to what Iexpected. Using a ‘NOT EXISTS’ option in the WHERE clause makes it possible toinsert a query which returns a result. If a result is returned, the parent queryincludes the result."SELECT p.id, p.titleFROM posts pWHERE NOT EXISTS ( SELECT p.id FROM tags t WHERE t.post_id = p.id AND t.tag_name IN ('horrible'))"This query is designed for a one-to-many relationship between two tables, wherethe ‘tags’ table includes the post_id and the tag in ‘tag_name’. This is atleast a little closer, but doesn’t cover my many-to-many relationshiprequirement."
} ,
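For what it's worth, here is a sketch of how the same NOT EXISTS idea can be pushed through the join table to cover the many-to-many case described at the start of the post; the table and column names (post_tags.post_id, post_tags.tag_id, tags.name) are assumed from the Rails conventions mentioned above, not taken from a working schema.
# From a Rails console: posts that have no 'horrible' tag at all.
posts = Post.find_by_sql(<<-SQL)
  SELECT p.id, p.title
  FROM posts p
  WHERE NOT EXISTS (
    SELECT 1
    FROM post_tags pt
    INNER JOIN tags t ON t.id = pt.tag_id
    WHERE pt.post_id = p.id
      AND t.name = 'horrible'
  )
SQL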
{
"title" : "Listing Gems from Rails Console",
"category" : "",
"tags" : "",
"url" : "/2012/06/listing-gems-from-rails-console/",
"date" : "2012-06-21 00:59:03 -0400",
"content" : "Got this from Stack Overflow, figured it could come in handy at some point inthe future.Gem.loaded_specs.values.map {|x| "#{x.name} #{x.version}"}"
} ,
{
"title" : "Add a Serialized Hash Attribute to a Factory_Girl Definition",
"category" : "",
"tags" : "factory_girl, hash",
"url" : "/2012/06/add-a-serialized-hash-attribute-to-a-factory_girl-definition/",
"date" : "2012-06-11 23:25:13 -0400",
"content" : "I recently declared an ActiveRecord model which stores a serialized Hash insideof a text field. When I tried to setup a factory for this model usingFactoryGirl, I received many syntax errors. This is because FactoryGirlattributes expect a single value or a certain form of code block.factory :post do title "Example Post" body "This is the body of the example post" meta { "version" => 2 } created_at "2012-06-01 17:53:13" endTo include a hash as an attribute of a factory, declare the Hash separately andthen simply assign it directly in the factory definition.meta_hash = { :version => 2 } factory :post do title "Example Post" body "This is the body of the example post" meta meta_hash created_at "2012-06-01 17:53:13" endAs Joshua Clayton pointed out, one could also do the following:factory :post do title "Example Post" body "This is the body of the example post" meta { { version: 2 } } # or meta({ version: 2 }) created_at "2012-06-01 17:53:13"end"
} ,
{
"title" : "List Sorted Methods in Ruby",
"category" : "",
"tags" : "",
"url" : "/2012/06/list-sorted-methods-in-ruby/",
"date" : "2012-06-11 21:06:48 -0400",
"content" : "I often use ‘methods’ to get a list of methods available for an object in Ruby,but it can be a pain trying to look through the list for what I want. I wish itoutputed in a sorted list straight down the page. This template will help youachieve that. Maybe I should override the ‘methods’ method. Hm…"object".methods.sort.each do |method| puts method endIf you want to get only methods with a certain string inside them, use this:"object".methods.sort.each do |method| puts method if method.to_s.index('search_string') end"
} ,
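A shorter variant of the same idea, added here as an aside: Array#grep does the filtering, so the conditional inside the block isn't strictly needed.
# All methods containing 'search_string', sorted:
"object".methods.grep(/search_string/).sort.each { |m| puts m }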
{
"title" : "Updating a Serialized Object from a Web form",
"category" : "",
"tags" : "serialize",
"url" : "/2012/06/updating-a-serialized-object-from-a-web-form/",
"date" : "2012-06-06 21:19:01 -0400",
"content" : "You may run into a situation where you create some sort of standard Ruby classthat you want to associate with an ActiveRecord model. The serialize methodallows you to store an object inside of a text field for an ActiveRecord object.With Rails 2.3 support for models nested within forms was added, but it’s clearthat this support isn’t compatible with serialized objects. In my example belowI have a Post model, which represents a simple blog post. The serialized objectis instantiated from a custom class called ‘Metadata’, which stores metadata forthe post like it’s type and version.class Post < ActiveRecord::Base serialize(:meta, Metadata) accepts_nested_attributes_for :metaendclass Metadata # Type of Post attr_accessor :type # Version of Post attr_accessor :version # builds new instance using hash def initialize(params = {}) self.errors = Array.new self.type = params[:type] unless params[:type] == nil self.version = params[:version] unless params[:version] == nil endendI tried to setup the Post to accept nested attributes for my serialized field,which resulted in the following error:> post = Post.newArgumentError: No association found for name 'serialized_field'. Has it been defined yet?I tried to setup a form using the ‘simple_fields_for’ method used bySimpleform (the equivalent of fields_for.<%= simple_form_for @post do |f| %> <%= f.input :title %> <%= f.input :body, :as => 'text' %> <%= f.simple_fields_for :meta do |m| %> <%= m.select :type, options_for_select(@post_types, @post.meta.type) %> <%= m.input :version %> <% end %><% end %>This only resulted in an error with the ‘update_attributes’ method provided byActiveRecord. A more detailed Gist is available here.> post.update_attributes(:meta => { :type => 'awesome' })ActiveRecord::SerializationTypeMismatch: meta was supposed to be a Metadata, but was a HashThis causes me to suspect that Rails is not yet designed to handle the update ofserialized objects via a nested hash included with the parameters submitted bythe form. I can see that what happens is that the nested parameter hash itselfis assigned to the ‘meta’ field for the object, instead of the values for eachhash key being applied to the custom object.The solution that works is instead of defining the ‘simple_fields_for’ block aspart of the parent object which the form is being built for, simply insert it asits own floating object inside your form. In the above example the‘simple_fields_for’ method was called and passed a block within the Post form.This results in the fields being defined as part of the Post in the HTML form.<input id="post_meta_version" name="post[meta][player]" size="50" type="text">Next in your controller, assign/update the parameters for the custom objectseparately.# PUT /posts/1def update @post = Post.find(params[:id]) respond_to do |format| if @post.update_attributes(params[:post]) # Update Metadata and save again @post.meta.update_attributes(params[:metadata]) @post.save format.html { redirect_to @post, notice: 'Post was successfully updated.' } format.json { head :no_content } else format.html { render action: "edit" } format.json { render json: @post.errors, status: :unprocessable_entity } end endend"
} ,
{
"title" : "RSpec Controller Tests Receiving 'No route matches' Error",
"category" : "",
"tags" : "rspec, namespaced controller",
"url" : "/2012/05/rspec-controller-tests-receiving-no-route-matches-error/",
"date" : "2012-05-25 21:24:17 -0400",
"content" : "I’m developing a Rails engine gem for the company I’m working for, which willprovide an API for the applications we’re using. The gem I’m creating will beused with a Rails 3.0.9 system, using Rspec-Rails version 2.10.1. I had a routeto my API interface setup in the config/routes.rb file like so:Rails.application.routes.draw do match '/companyname/api_name' => 'CompanyName/ApiName/ControllerName#apimethod'endWhen I added a ‘get’ request call to my controller test, I was getting thiserror:Failure/Error: get :apimethodActionController::RoutingError: No route matches {:controller=>"company_name/api_name/controller_name", :action=>"apimethod"}I spent a great deal of time trying to figure out how to get my test to work.Different versions of Rspec, redefining my route using nested scopes, etc.It turns out I just needed to redefine my route in underscore case so thatRSpec could match it with an existing route that was defined.match '/companyname/api_name' => 'company_name/api_name/controller_name#index'I guess Rspec controller tests use a reverse lookup based on underscore case,and not camelcase). Rails will setup and interpret the route if you define itin either case though.Seems so simple now that I know the answer. Hopefully I’ll save someone elsetime with this post."
} ,
{
"title" : "Cubase Installation Failure",
"category" : "",
"tags" : "cubase, package scripts, install failure",
"url" : "/2012/05/cubase-installation-failure/",
"date" : "2012-05-23 06:48:01 -0400",
"content" : "I recently ran into issues installing Cubase 4 on my Mac running Snow Leopard.I uninstalled Cubase 5 Essential, thinking that this was causing a conflict, andthus stopping me from installing an older version. This wasn’t the caseI tried to install Cubase 5 Essential again, and I got the same type of errorwith it’s installer. I received Cubase 6 in the mail today and tried to installit…only to receive the same type of error:5/23/12 12:09:29 AM installd[483] PackageKit: Install Failed: PKG: post-install scripts for "de.steinberg.vstsounds.halionsonicseadvancedcontent"Error Domain=PKInstallErrorDomain Code=112 UserInfo=0x1005f1690 "An error occurred while running scripts from the package 'vstsounds_HALion Sonic SE Advanced Content.pkg'." { NSFilePath = "./postinstall"; NSLocalizedDescription = "An error occurred while running scripts from the package \U201cvstsounds_HALion Sonic SE Advanced Content.pkg\U201d."; NSURL = "./Contents/Packages/vstsounds_HALion%20Sonic%20SE%20Advanced%20Content.pkg -- file://localhost/Volumes/Cubase%206/Cubase%206%20for%20Mac%20OS%20X/Cubase%206.mpkg/"; PKInstallPackageIdentifier = "de.steinberg.vstsounds.halionsonicseadvancedcontent";}5/23/12 12:09:29 AM com.apple.ReportCrash.Root[541] 2012-05-23 00:09:29.688 ReportCrash[541:2803] Saved crash report for installd[540] version ??? (???) to /Library/Logs/DiagnosticReports/installd_2012-05-23-000929_localhost.crash5/23/12 12:09:30 AM Installer[439] Install failed: The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.After many attempts to repair the permissions and the disk itself using theDisk Utility, I identified the cause of the issue after inspecting the crashreport that the Cubase 6 installer provided. It turns out that Steinberg usesRuby scripts to install packages. I had renamed the symlink located at/usr/bin/ruby as /usr/bin/ruby.old, and created a new one that pointed to/opt/local/bin/ruby (the location of the Ruby interpreter I had previouslyinstalled using MacPorts).After removing this symlink, and restoring the old one (which pointed to/System/Library/Frameworks/Rubyframework/Versions/Current/usr/bin/ruby), theinstallation of Cubase 6 was successful.HORRAY!"
} ,
{
"title" : "Generators Not Working in Rails 2.3.8",
"category" : "",
"tags" : "",
"url" : "/2012/05/generators-not-working-in-rails-2-3-8/",
"date" : "2012-05-16 04:14:15 -0400",
"content" : "I’m currently working on a gem that is going to use a generator to create filesin a Rails 2.3.8 application. One of the applications we’re still using is usingRails 2.3.8, so I have to make a gem compatible with that version of Rails.I installed Bundler v1.0.22 and configured it to work with the Rails app, andthen followed many instructions and various configurations to get mygenerator classes to load. Every time I would try to run the generator howeverit simply gave me the error “Couldn’t find ‘hello’ generator”.I eventually found out thatgems loaded via Git or Path via Bundler fail to load the generators. I wasable to confirm this by loading the Devise gem via Git URI, and it’s generatorsfailed to load. As I’m currently developing a gem, I need to be able to load itin dynamically via Git or via direct path…so this sucks.I found that I could use the ‘gem server’ command to run a local gem server, soI’ve been using the following commands to package up my gem and install it tothe default system RubyGems (I’m using RVM), after I’ve updated the version forthe gem in the gemspec (or version.rb file) in the gem source directory.$ gem build ~/Documents/gems/my-gem/my-gem.gemspec$ gem install --local my-gem-0.0.2.gemSuccessfully installed my-gem-0.0.21 gem installedInstalling ri documentation for my-gem-0.0.2...Building YARD (yri) index for my-gem-0.0.2...Installing RDoc documentation for my-gem-0.0.2...I then go into the Gemfile for my Rails 2.3.8 app and update the version, eachtime like so. As you can see I have a ‘source’ command for Bundler to obtaingems from my locally running Gem server.source "http://0.0.0.0:8808/"gem 'my-gem', '0.0.2'I run a ‘bundle install’, then run ‘bundle exec ruby script/generate’ to see ifthe generator defined in my gem is registering with the Rails app. Unfortunatelyso far the Devise generators are registering, but mine are not. I’m stillinvestigating.To remove the previous un-successful gems, I use this command.$ gem uninstall my-gem -v 0.0.1Successfully uninstalled my-gem-0.0.1Further investigation just revealed that somehow my gems are being installed,but are empty (no files). It’s obvious now why my generators aren’t registering,they aren’t even declared in a file. I found that I needed to make the commitsto my Git repository for my gem, and push the changes, and then run the‘gem build’ command from within the gem directory itself.Prior to this it got these errors:$ gem build ~/Documents/gems/my-gem/my-gem.gemspecfatal: Not a git repository (or any of the parent directories): .gitfatal: Not a git repository (or any of the parent directories): .gitfatal: Not a git repository (or any of the parent directories): .gitWARNING: no homepage specified Successfully built RubyGem Name: my-gem Version: 0.0.3 File: kabam-gmo-api-rails2-0.0.8.gemIf I run ‘gem build my-gem.gemspec’ from inside of the gem folder, these errorsdid not occur. I simply rebuilt my gem, and another gem I also developed whichit depends on , installed them in the default system gem set (which makes themavailable via my gem server). Then I ran ‘bundle install’ on my Rails 2 app,then ‘bundle exec ruby script/generate’ and now I see my generator.[UPDATE] Sept 11, 2012:I’m finally deploying my app, which relies on this gem I’ve developed with aRails 2 generator. We don’t yet have a gem server setup, so I’ll have to relyon the application loading the gem via Bundler from the Git repository. 
Asnoted above, the generators I’ve included aren’t listed as available when I run‘bundle exec script/generate’, nor are they executable when trying to run‘bundle exec script/generate kabam_gmo_api_install’Further investigation shows that gems loaded from a gem server are placed in anaccessible path, however gems originating locally or from git are loaded undera bundler gems path.$ bundle list kabam-gmo-api-rails2/Users/jsmith/.rvm/gems/ruby-1.8.7-p358@myapp/bundler/gems/gmo-api-rails2-7296b5229c7c$ bundle list rspec/Users/jsmith/.rvm/gems/ruby-1.8.7-p358@myapp/gems/rspec-1.3.2I suspect that this alternative gem path just isn’t present in the environmentwhen bundler installs via Git or local.I found an article by Yehuda Katz that explains the internals of Bundlerfurther. This issue just isn’t worth the time and effort to resolve.I’ve just included instructions for building the gems from source, installing toGem path, running ‘bundle install’ with just the install gem names, runninggenerators, restoring the Gemfile configuration to use Git repo URI’s, andrunning ‘bundle install’ again."
} ,
{
"title" : "Establishing New Ruby Environment in a Folder using RVM",
"category" : "",
"tags" : "rvm",
"url" : "/2012/05/establishing-new-ruby-environment-in-a-folder-using-rvm/",
"date" : "2012-05-15 20:58:16 -0400",
"content" : "I know this is documented on the official RVM website, but I hate having tolook it up over and over again each time I want to create a new RVMRC file.$ mkdir -p ~/projects/rails2test$ cd ~/projects/rails2test$ rvm --rvmrc --create 1.8.7@rails2test$ cd ..$ cd rails2test=============================================================================== NOTICE ================================================================================ RVM has encountered a new or modified .rvmrc file in the current directory == This is a shell script and therefore may contain any shell commands. == == Examine the contents of this file carefully to be sure the contents are == safe before trusting it! ( Choose v[iew] below to view the contents ) ===============================================================================Do you wish to trust this .rvmrc file? (/Users/jmiller/Documents/rails2-apps/.rvmrc)y[es], n[o], v[iew], c[ancel]> yI’m needing to setup a Rails 2.3.8 system, so I can test my gem forcompatibility between it and Rails 3.0.9.I stumbled onto an article with suggestions for how to install RubyGemsfor Rails 2.3.8. This seems to run with errors, but the last command seemed tocomplete without errors other than ‘README’ not found:$ rvm all do gem install -v 1.4.2 rubygems-update$ rvm gem update --system 1.4.2$ update_rubygems$ rvm all do gem install -v 2.3.8 rails$ rails --versionRails 2.3.8$ ruby --versionruby 1.8.7 (2012-02-08 patchlevel 358) [i686-darwin10.8.0]"
} ,
{
"title" : "History of the Canonical Gem Host for Ruby Gems",
"category" : "",
"tags" : "",
"url" : "/2012/05/history-of-the-canonical-gem-host-for-ruby-gems/",
"date" : "2012-05-14 21:41:23 -0400",
"content" : "The default repository for downloading gems using RubyGems was originallyRubyForge.org. This was likely because the RubyGems project was hosted onlyfrom RubyForge. This meant that when you ran ‘gem install rails’, your RubyGemsinstallation was configured to download the gem from ‘gems.rubyforge.org’. InAugust of 2008 Github started gaining popularity amongst the Ruby communityafter it started providing it’s own gem server via gems.github.com. Thisresulted in many Ruby developers configuring RubyGems to use the Github serveras a secondary source of gems.In August of 2009 a new gem hosting repository came onto the scene known asGemCutter, aiming to resolve issues caused by how Github and RubyForge werehandling hosting of gems. In September they decided to move the service toRubyGems.org. In October of Githubdiscontinued building and serving gems from gems.github.com, andRubyGems.org became the official default host for Ruby gems."
} ,
{
"title" : "Using Serialize Option with ActiveRecord Objects",
"category" : "",
"tags" : "rails3.1, forms",
"url" : "/2012/04/using-serialize-option-with-activerecord-objects/",
"date" : "2012-04-25 22:01:26 -0400",
"content" : "Documentation seems to be more available on how to build forms with checkboxes or a multiple select field for ActiveRecord objects that have a has_manyor has_many_and_belongs_to association with other ActiveRecord objects.This article shows you how provide a multiple select form based on a customdefined array, with the selected options stored in a single attribute of yourActiveRecord object.Lets say you are working on a form for a blog post that needs a multi-selectfield of statically defined adjectives, with the one or many adjectives savedto one field for the post.def self.adjectives [ 'awesome', 'phenomenal', 'terrific', 'fantastic', 'amazing', 'outstanding', 'stupendous', 'great', 'incredible', 'magnificent', 'impressive', 'excellent', 'sensational', 'fantasmagoric', 'legendary', 'marvelous' ]endNext, inside of your model, insert a line indicating the name of the string ortext field you’re going to use to store the serialized values from the form.class Post < ActiveRecord::Base serialize :positive_adjectives, ArrayendIn the view file for your form, insert the following tag to create a selecttag which loads all the options with the previously selected ones saved to thepost field in a single field, serialized in YAML format.<%= select_tag 'post[positive_adjectives]', options_for_select(Post.adjectives, @post.positive_adjectives), { :multiple => true, :size => 10 } %>It appears that there are methods, possibly native ones for Rails 3.2.2soon, for storing your objects in the database in JSON format instead of YAML."
} ,
{
"title" : "Save the Tests, Don't Throw Them Away",
"category" : "",
"tags" : "testing, tdd",
"url" : "/2012/04/save-the-tests-dont-throw-them-away/",
"date" : "2012-04-20 20:48:02 -0400",
"content" : "So it’s been several weeks since I started using test driven development. I’musing FactoryGirl instead of fixtures, because I’ve heard that fixtures arelimiting. I’d rather just write Factories from the beginning. I’m also usingstandard Test::Unit based unit and functional tests. Haven’t touched onintegration testing yet.As I’ve gone along and written these tests, I’ve learned how to do thingseffectively and am carving out my own style.For instance you could either test for a link with the content ‘Edit’ on thepage, or you could add an ID or class to the link and test for the presence ofthat link with that class. I found that I’d rather do both just to ensure thatthe button is or is not present in a certain circumstance.assert_select "a.btn", { :count => 0, :text => "Edit"}, "Edit button shouldn't be present for sent request"assert_select "a.btn", { :count => 0, :text => "Delete"}, "Delete button shouldn't be present for sent request"assert_select "a.edit-request", false, "Edit button shouldn't be present for sent request"assert_select "a.delete-request", false, "Delete button shouldn't be present for sent request"It may seem like a lot more coding is necessary for the tests than the codeitself which it is testing. This is true, and is one of the reasons I wasleery about writing tests. But here is the thing I wasn’t aware of, at leastnot in the way I understand it now.First off, it may look like a lot more code, but really it’s just a lot ofduplicated code that varies. Like in the above example, it’s really two typesof tests applied to two different buttons on the page. One type tests toensure a link with the ‘btn’ class (used by Twitter Bootstrap) with certaintext content isn’t present, and the other checks to make sure a link with aspecific class name isn’t present. I did this so that if someone else changesthe text of the button to say ‘Edit Request’, the class test will still catchif the link/button is present when it’s not supposed to. Now that I have thistype of test in place, when I need it again I can just copy and paste thesetests for the correct syntax, then modify them to meet the needs of the newpage I’m testing. So ultimately, it’s not that much more coding. It’s kind oflike when a person looks at a mixing console in a recording studio and thinks“Wow, how can that guy understand what all those knobs and buttons do?”. Oncethey realize that you just have to understand one column of those knobs andbuttons, and that each other column applied to a different instrument in themusic mix, all of a sudden understanding it becomes extremely simplified. It’sthe same thing with testing.Another point is that you don’t have to write tests for every single littlething. For my application, requests that have been sent to a remote systemlater should not be edited or deleted. I don’t want people trying to usebuttons that shouldn’t be showing on the screen, which is why I wrote thetests above. I know that I’d receive complaints from the users using theinterface if those buttons are showing, and if they result in error whensomeone attempts to use them in the wrong context…or worse if they result ina sent request being modified, and thus an accurate history for that requestis compromised instead of protected. However I found that when I was editingone of these requests, and the data I entered was invalid, a select form fieldthat relied on a collection of options rendered as a text field instead of aselect field. 
This is one of those special bugs that doesn’t necessarilycompromise a core function of the system. It’s not going to result in somemajor failure or side effect that would warrant a test, so I’ve chosen to justfix the bug and move on without a test for that scenario. If the issue doespop up in practice more than I expect, then I’ll write a test to avoid ithappening again, but for now I’m picking and choosing my battles based on myexperience and expectations.The main point of this post that I wanted to make however is that you’re doingit anyway. If you’re not using test driven development, you’re testing yourmodels in the Rails console, or opening your browser and testing the appwithin the development environment. For model method testing, once you closeyour terminal window the coding you wrote in the IRB console is gone forever.The same principle of loss applies to browser based tests. You have to adjustthe records in your development environment often, and go through a specialsequence of events just to get the right state in your development environmentwith a browser, just to test one single result. This is EXTREMELY timeconsuming and laborious. With unit tests you can test the result of eachmethod, and are actually more likely to implement tests using rare cases youwould be too lazy to test for otherwise. With functional tests, theenvironment needed to properly test a specific scenario is much easier tosetup and maintain, and is performed in seconds instead of minutes (or evenhours).I’m not yet opinionated regarding other testing options such as Rspec,Cucumber, etc. however I definitely see the benefit of picking up a habit ofwriting tests as the solution to a major waste of time later on after yourproject has grown. I have much less anxiety regarding unexpected bugs, and Iknow that once one is identified it can be fixed and a test can be written forit if necessary."
} ,
{
"title" : "Factory Girl Associations and Records Persisting Across Tests",
"category" : "",
"tags" : "rails3.1, testing, factory_girl, tdd",
"url" : "/2012/04/factory-girl-associations-records-persisting-across-tests/",
"date" : "2012-04-12 01:32:23 -0400",
"content" : "I just recently started to adopt test driven development practices. Theproject I’m working on needs to get done soon, and I didn’t want to get heldup learning Rspec. After much consulting with other developers at thecompany I work for, I had decided to use basic Test::Unit tests withFactoryGirl factories instead of fixtures, and adopt Shoulda if a scenarioarises where the options it provides (contexts) are needed.So far things have been running well, and I’m starting to understand just howimportant testing is. You don’t have to write tests for every single thing youdo, but if you implement some sort of feature that you seriously don’t want tobreak at some point in the future, setup a test for it. Once you setup a testfor one type of feature, you can re-use the code later for similar testing. Sodon’t worry about how long it takes the first time around, it will pay offlater when that function isn’t broken because you caught it. I didn’t realizeit, but errors you didn’t expect it to directly catch, like the dreaded“undefined method ‘foo’ for nil:NilClass” exception, also popup periodicallyand alert you that you broke something, even though your test wasn’t built tocatch those. This is nice because you might change something in a model, andthen all of a sudden something in a view is broken.Earlier today I had a new functional test I wrote fail because it wasexpecting a view to render an index of child records that Factory Girl wasn’tcreating. For this example ‘User’ has multiple ‘Posts’, and the post recordsbelonging to the user weren’t being generated. I expected that FactoryGirl wasActiveRecord aware, and would simply create the dependent post records, butthat’s not the case. Much research online, sifting through articles with theolder syntax used by previous version of FactoryGirl, led to much confusion.For a bit I was thinking that one needs to declare associations for children,instead of parents, and then use Factory.create on the highest level parent sothat all the children records are generated before your test. This wasn’t thecase.It turns out that you should only use associations to define the relationshipinside of a child factory for it’s parent, not the other way around. If you doit the other way around, you’ll end up getting ‘stack level too deep’ errors.I expected that perhaps there would be some sort of way of defining that afactory should create or build the children records, which are definedseparately, but for practicality it doesn’t seem this is the case. Instead,Factory Girl expects you to use the ‘after_create’ callback to causeassociated children records to be created. I guess this makes sense, as itwould be too redundant to create multiple factories in a separate file, muchlike you define child fixtures. Its more encapsulated to generate the childrenwith the parent in the same code block.factory :user do association :group name "John Smith" created_at "2011-04-11 12:00:00" factory :user_with_posts do # default to 5 posts ignore do posts_count 5 end after_create do |user, evaluator| FactoryGirl.create_list(:post, evaluator.posts_count, user: user) end endendThe above declaration would allow one to create a user with 5 posts bydefault, or create one with 15 posts instead. 
Ironically enough, I figuredthis out by referring to the official FactoryGirl GETTING STARTED docs,after searching elsewhere on the internet.FactoryGirl.create(:user).posts.length # 0FactoryGirl.create(:user_with_posts).posts.length # 5FactoryGirl.create(:user_with_posts, posts_count: 15).posts.length # 15I’ve been creating factories instead of fixtures, using factories exclusivelyin my application. I assumed that somehow when I ran tests that Test::Unitand/or FactoryGirl would automatically create and destroy the records whichare created for each test, so that there is a clean slate each time anindividual test is run. Further investigation pointed to the term being‘transactional’, with a deprecated ‘use_transactional_fixtures’ setting thatwas declared as either TRUE or FALSE for ActiveSupport::TestCase. It appearsthis is the default now for tests.Once I added a factory that generates a parent with children records, I hadanother controller test report an error where the assert_select didn’t findthe form with ID I had expected. That ID was for one of the children records,with a form expected using id “edit_message_1”, but was instead getting“edit_message_41”. Further investigation suggested that records are persistingacross tests.Then I inspected log/test.log, and saw that there were transactional commandsoccurring with BEGIN and ROLLBACK commands used by MySQL. (0.1ms) BEGIN (0.1ms) SAVEPOINT active_record_1 SQL (0.2ms) INSERT INTO `posts` (`title`, `body`, `created_at`, `updated_at`) VALUES ('Foo', 'This is Foo', '2012-04-12 03:56:24', '2012-04-12 03:56:24') (0.1ms) RELEASE SAVEPOINT active_record_1 (0.1ms) SAVEPOINT active_record_1 SQL (0.2ms) DELETE FROM `posts` WHERE `posts`.`id` = 1 (0.1ms) RELEASE SAVEPOINT active_record_1 (0.4ms) ROLLBACKAnd yet for each ID of each record created by Factory Girl the number wasincremented each time. I was perplexed why the tests were transactional,however the ID’s were being incremented. I’m using MySQL 5.5.19, with InnoDBtables, so the transactional commands being used should be supported.I then spoke with a co-worker and he informed me that the tests do usetransactions, so the records are removed after each test. The transactionalqueries however do not stop the MySQL database from auto-incrementing the IDfor the other additional records. He advised that I simply not write the teststo rely on a specific ID, but instead rely on something static in the viewlike the class name for an element, or that there is the correct number ofelements for a given class.Further more he recommended not relying on view testing so much via theFunctional/Controller tests and to instead do more of that via integration oracceptance tests, such as those using Capybara to do a full-stack test fromthe browser perspective."
} ,
{
"title" : "Generating Test File Stubs for Existing Models, Views, and Controllers",
"category" : "",
"tags" : "rails3.1, testing, rspec, tdd",
"url" : "/2012/04/generating-rspec-tests-for-existing-models-views-controllers/",
"date" : "2012-04-03 16:10:12 -0400",
"content" : "I’ve noticed that if you install certain testing gems, like Factory Girl, orRspec, that your Rails application will create test files for these librariesinstead of using the defaults. Even further you can configure the generatorsused by your Rails app in /config/application.rb# Configure generators values. Many other options are available,# be sure to check the documentation.# http://edgeguides.rubyonrails.org/generators.html#customizing-your-workflowconfig.generators do |g| g.stylesheets false g.test_framework :rspec g.fallbacks[:rspec] = :test_unit g.fixture_replacement :factory_girlendI’ve been anxious however in deciding which testing tools to learn and usewith my project. If I choose the wrong one, then all the scaffold generatedtest code will be generated for the test framework I might choose to quitusing at some point.This is not completely correct though, as I’ve discovered.I found that the following commands will generate the empty spec files.rails generate rspec:modelrails generate rspec:viewrails generate rspec:controllerEven better though, if you’re looking for scaffold style files to be put inplace, use the following syntax to generate scaffold code.rails generate rspec:scaffold Post title:string body:textIf you decide to use basic Test::Unit based tests, instead of going withRspec, you can also reconfigure your app to use Test::Unit again withgenerators, and then use the rake commands to generate the files that weregenerated via rspec.$ rails gUsage: rails generate GENERATOR [args] [options]General options: -h, [--help] # Print generator's options and usage -p, [--pretend] # Run but do not make any changes -f, [--force] # Overwrite files that already exist -s, [--skip] # Skip files that already exist -q, [--quiet] # Suppress status outputPlease choose a generator below.TestUnit: test_unit:controller test_unit:helper test_unit:integration test_unit:mailer test_unit:model test_unit:observer test_unit:performance test_unit:plugin test_unit:scaffold"
} ,
{
"title" : "Rails 3 and Subclasses Method",
"category" : "",
"tags" : "rails3.1",
"url" : "/2012/03/rails-3-and-subclasses-method/",
"date" : "2012-03-26 21:38:14 -0400",
"content" : "I was just trying to create coding that reflectively loads the subclasses of aclass I’ve defined. The idea is that as new subclasses are added, the scriptI’m writing can detect which ones are present and inform a remote API thatsupport for a specific API feature is available.I had executed the method once on the class, and it did return the name of thesubclass that I had defined and checked for the existence of in the Railsconsole environment. Then I added other subclasses, reloaded (reload!), andran the method once again. This time I got nothing but an empty array returned.It turns out that Rails 3 uses autoloading for classes…so the subclasseshave to be referenced at some point, and thus loaded into memory, before thesubclasses method will include them in the list."
} ,
{
"title" : "Locate and Updatedb with Homebrew",
"category" : "",
"tags" : "os x lion, findutils, homebrew",
"url" : "/2012/03/locate-and-updatedb-with-homebrew/",
"date" : "2012-03-26 17:08:59 -0400",
"content" : "UPDATE: I ran into errors and decided to not use the findutils provided byHomebrew.When you run the locate command, the system will now tell you to run theservice that creates the file database:WARNING: The locate database (/var/db/locate.database) does not exist.To create the database, run the following command: sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plistPlease be aware that the database can take some time to generate; oncethe database has been created, this message will no longer appear.You’re better of simply running this and then using the built in ‘locate’ command.I used to use MacPorts to ensure that my command line environment on my Macwas almost exclusively using MacPort provided binaries, not the built inbinaries and libraries that are packaged with Mac OS X.I had heard of Homebrew, but MacPorts seemed to work fine for me. Then Irealized that Homebrew does the same thing, but installs software in/usr/local, which doesn’t require sudo. The benefits of Homebrew seem to besimplicity, lack of intrusiveness, and speed. I’m likely going to use itas the package manager in my Rails developer toolkit in the future.I just noticed that I am not able to use the ‘locate’ command to search forcertain matching filenames on my system. I love using this option piped intogrep to find what I’m looking for, such as the path to a particular gem I’mneeding to inspect the code for.I installed the ‘findutils’ package that includes ‘locate’ via Homebrew.$ brew install findutils==> Downloading http://ftpmirror.gnu.org/findutils/findutils-4.4.2.tar.gz######################################################################## 100.0%==> ./configure --prefix=/usr/local/Cellar/findutils/4.4.2 --program-prefix=g--disable-debug==> make installWarning: Non-libraries were installed to "lib".Installing non-libraries to "lib" is bad practice.The offending files are:/usr/local/Cellar/findutils/4.4.2/lib/charset.alias==> Summary/usr/local/Cellar/findutils/4.4.2: 19 files, 1.2M, built in 68 secondsI thought that the warning regarding non-libraries being installed to “lib”stopped the package from being installed properly, but it turns out that theexecutables were installed with symlinks placed in /usr/local/bin (which is inmy path) and pointing to the actual installed binaries. Instead of ‘locate’and ‘updatedb’, the commands are ‘glocate’ and ‘gupdatedb’.As advised by Grogs, I updated my .bash_profile file to set theLOCATE_PATH to point to the database in my local users tmp directory.I didn’t want to have to setup a running cron daemon on my Mac, and I’m justfine with running the updatedb command manually when needed, so I simply addedaliases to build and locate using the proper executable filenames.alias updatedb="gupdatedb --localpaths='/Users/jmiller' --output='/Users/jmiller/tmp/locatedb'"alias locate="glocate"export LOCATE_PATH="~/tmp/locatedb"Before this would work though, I did need to create a ‘tmp’ folder in my homedirectory.mkdir ~/tmpUPDATE:This wasn’t successful however. 
I started to get an error:$ updatedb/usr/bin/sort: string comparison failed: Illegal byte sequence/usr/bin/sort: Set LC_ALL='C' to work around the problem./usr/bin/sort: The strings compared were `/USERS/JMILLER/LIBRARY/APPLICATION SUPPORT/TEXTMATE/BUNDLES/SCSS.TMBUNDLE/COMMANDS/INCREASE NUMBER.TMCOMMAND' and `/USERS/JMILLER/LIBRARY/APPLICATION SUPPORT/TEXTMATE/BUNDLES/SCSS.TMBUNDLE/COMMANDS/INSERT COLOR302200246.TMCOMMAND'.I’m going to just use the locate/updatedb options which come packaged with MacOS X."
} ,
{
"title" : "Foreign Key References when Generating Model",
"category" : "",
"tags" : "migrations",
"url" : "/2012/03/foreign-key-references-when-generating-mode/",
"date" : "2012-03-23 21:15:26 -0400",
"content" : "I forget the proper syntax for a model generation command that includes areference to another models id (foreign key).Here is an example you can use to remember:rails g model Post user:references title:string body:textSince the ‘user’ model already exists, Rails knows that this should be theuser_id field that it generates. I guess it’s not a big deal, you could justdo ‘user_id:integer’, but what fun is that?"
} ,
{
"title" : "Edit Devise User without Password",
"category" : "",
"tags" : "rails, devise",
"url" : "/2012/03/edit-devise-user-without-password/",
"date" : "2012-03-20 19:38:43 -0400",
"content" : "I recently setup a custom controller to edit/update my Admin accounts, which areauthenticated using Plataformatec’s Devise gem.I found an article in the Devise Wiki that mentions using some sort of‘update_without_password’ method to update the model without requiring thepassword. In this case I’m not requiring the user to provide their ownpassword to edit their info. I’m allowing them to do it straight out.The solution that I found was to simply remove the ‘password’ and‘password_confirmation’ from the parameter set if both are blank.# remove password parameters if blankif params[:admin]['password'].blank? &amp;&amp; params[:admin]['confirmation'].blank? params[:admin].delete('password') params[:admin].delete('password_confirmation')end"
} ,
{
"title" : "Factory Girl Not Generating Factories with Scaffold",
"category" : "",
"tags" : "rails3.1, factory_girl",
"url" : "/2012/03/factory-girl-not-generating-factories-with-scaffold/",
"date" : "2012-03-19 17:18:44 -0400",
"content" : "I just started a new Rails 3.2 project, and to ensure that the proper testfiles are generated using Shoulda or Factory_Girl, I’ve installed those gemsand configured the application to generate the test files using these gems.Added to config/application.rb: # Configure generators values. # http://guides.rubyonrails.org/generators.html config.generators do |g| g.stylesheets false g.test_framework :shoulda g.fallbacks[:shoulda] = :test_unit g.fixture_replacement :factory_girl endEach time I would try to create a new scaffold, it would use shoulda togenerate the test unit file, but would generate a YAML fixture.invoke active_recordcreate db/migrate/20120319180004_create_posts.rbcreate app/models/post.rbinvoke shouldacreate test/unit/post_test.rbcreate test/fixtures/posts.ymlOver a year ago there was a gem needed to ensure that generators werepresent to generate Factory_Girl factories instead of YAML fixtures, but thecode for those generators was moved to the official Factory_Girl gem, sothat’s not the cause of this issue.It turns out that I had configured factory_girl_rails in my Gemfile only underthe ‘test’ group, and not the ‘development’ group as well.group :development, :test do gem 'shoulda' gem 'factory_girl' gem 'factory_girl_rails'endAfter configuring these under both test and development, the scaffoldgenerator created the factory under ‘test/factories’ as I had expected."
} ,
{
"title" : "Ruby Comparison Operator =~",
"category" : "",
"tags" : "comparison operator, ruby",
"url" : "/2012/03/ruby-comparison-operator/",
"date" : "2012-03-07 21:26:34 -0500",
"content" : "I saw this in some code recently, wasn’t sure what it did.It basically returns TRUE or FALSE if there is a regular expression match,with the regular expression coming after the ‘=~’."
} ,
{
"title" : "Invalid Gemspec Error Regarding Invalid Date Format",
"category" : "",
"tags" : "rails3.1, rubygems",
"url" : "/2012/02/invalid-gemspec-error-invalid-date-format/",
"date" : "2012-02-22 21:30:45 -0500",
"content" : "I installed factory_girl (2.6.0) for a project I am working on recently, andall of a sudden I started getting errors with RubyGems when I would try to runa rake task, such as:Invalid gemspec in[/opt/local/lib/ruby/gems/1.8/specifications/capistrano-2.11.2.gemspec]:invalid date format in specification: "2012-02-22 00:00:00.000000000Z"Invalid gemspec in[/opt/local/lib/ruby/gems/1.8/specifications/capistrano-2.9.0.gemspec]:invalid date format in specification: "2011-09-24 00:00:00.000000000Z"I went through the long process of uninstalling all my gems, running anupdate on RubyGems using ‘gem update –system’ and then reinstalling themagain via ‘bundle install’. Still the gems were uninstalled, yet thespecification errors were still occurring.I tried to run ‘gem cleanup’ or ‘gem pristine –all’ to get rid of the error.Eventually I was able to resolve the issue, but I don’t remember exactly how Idid it. Then today I go to work on another workstation, and the same thingoccurs. It turns out that the invalid strings just need to beremoved from the specification files.As the errors pointed to files in /opt/local/lib/ruby/gems/1.8/specifications/(I’m using MacPorts), running these commandsresolved the issue for me.cd /opt/local/lib/ruby/gems/1.8/specifications/sudo find . -type f | sudo xargs perl -pi -e 's/ 00:00:00.000000000Z//'"
} ,
{
"title" : "Deleting Git Branches in Remote Repository",
"category" : "",
"tags" : "git",
"url" : "/2012/01/deleting-git-branches-in-remote-repository/",
"date" : "2012-01-24 21:08:06 -0500",
"content" : "I had recently used a branch to handle all the modifications I was making to asystem for a Rails 3.1 upgrade from Rails 2.4.3. After I merged my changesback into the master branch, I deleted the ‘rails3’ branch locally, but itstill remained on the remote server.I found that ‘git push origin :branch_name’ will delete the repository fromthe remote server if the branch has been removed locally.$ git push origin :rails3To git@example.com:myrepo.git - [deleted] rails3"
} ,
{
"title" : "Rails 3 on WHM / cPanel VPS Server",
"category" : "",
"tags" : "capistrano, passenger, rails3.1, cpanel",
"url" : "/2012/01/rails-3-on-whm-cpanel-vps-server/",
"date" : "2012-01-08 16:14:58 -0500",
"content" : "cPanel is working towards making Rails 3 applications run natively withPassenger, setup via the cPanel interface. I’m not really sure if this will beideal, as most organizations deploy their apps to the server using Capistrano,not uploading via FTP or something.I’ve been hosting a number of PHP driven sites, including this blog, from ashared hosting service for quite a while now. Shared hosting is fine forpersonal websites or even small businesses with 4-5 page brochure stylewebsites that do not receive lots of traffic, but they’re not fine if slowperformance or intermittent downtime causes you to loose business (or even therespect of your visitors). In such cases I recommend a VPS, because youcontrol who you’re hosting and thus can ensure optimal uptime and performance.I highly recommend Linode as a VPS provider.I’ve been using the shared hosting for PHP/Wordpress sites, and a VPS to hostthe Ruby on Rails applications I’ve been working on. Really this is expensive,so I’m wanting to consolidate to one VPS for everything.I’ve been forewarned that cPanel does things it’s own way, so if you’re tryingto do something out-of-the-box you can run into issues. I’m aware of this, andthrough this article will let you know if setting up a Rails 3.1.3 hostingenvironment is possible with a WHM / cPanel server (RELEASE version 11.30.5,build 3). I plan on using Gitosis under an account to host repositories,Capistrano for deployment, and Passenger with Apache2 already provided bycPanel.Installing RubyTo stay within the “box” of the cPanel environment, I installed Ruby using thescript provided by cPanel:/scripts/installrubyGitosisTo setup Gitosis you have to first install Python tools.yum -y install python-setuptoolsNext, to install Git, you’ll have to use a special command because cPanel hasconfigured /etc/yum.conf to exclude certain packages, including perl packages,so that they do not break or conflict with the cPanel system. Use thefollowing command to install Git:yum --disableexcludes=main install gitFrom the root home directory, download and install Gitosis.cd /rootgit clone git://eagain.net/gitosis.gitcd gitosispython setup.py installNext create an account to host your Git repositories from the WHM interface.I’ve added a user with the user name ‘git’, and used ‘git.web-app-host.com’ asthe domain (a subdomain under my hosting service domain). Set the password toa very long secure password. You won’t be needing it again, as you’ll be usingan SSH key to authenticate.After you’re done creating the cPanel account which will host therepositories, copy your public key from your local machine to your root usershome direcotry ( /root/ ).scp ~/.ssh/id_rsa.pub root@vps.web-app-host.com:/rootGo back to your SSH session as ‘root’ on the server and run this command toinitialize the Gitosis repository under the ‘git’ user account.root@vps [~]# sudo -H -u git gitosis-init < /root/id_rsa.pubInitialized empty Git repository in /home/git/repositories/gitosis-admin.git/Reinitialized existing Git repository in /home/git/repositories/gitosis-admin.git/NOTE: Do not use the ‘Manage SSH Keys’ option from the cPanel for the Gitacccount, as this will remove the Gitosis-admin repository key from/home/git/.ssh/authorized_keys.Run the following command to make sure the post-update hook is executable. Ifthis isn’t done, then tasks performed by Gitosis after you commit an updatearen’t performed (i.e. 
creating new repositories).sudo chmod u+x /home/git/repositories/gitosis-admin.git/hooks/post-updateOn your local machine, run the following command to clone the Gitosis-adminrepository, used to manage your repositories on the server, to your localmachine.git clone git@<YOURSERVER>:gitosis-admin.gitThis should look like this:$ git clone git@vps.web-app-host.com:gitosis-admin.gitCloning into 'gitosis-admin'...stdin: is not a ttyremote: Counting objects: 5, done.remote: Compressing objects: 100% (5/5), done.remote: Total 5 (delta 0), reused 5 (delta 0)Receiving objects: 100% (5/5), done.Note: If you are prompted for a password when running this clone command, youlikely have some sort of SSH configuration not setup properly on your localmachine. If you’re using multiple keys with various hosts, check~/.ssh/config and make sure you’re using the proper syntax. Runssh git@<YOURSERVER> -v to get a verbose output of what’s happening when theSSH session is initialized to investigate further.On the remote machine, go ahead and delete your public key from the root usershome directory.rm /root/id_rsa.pubAfter cloning the Gitosis repository to your local machine, you simply modifyand commit changes to the ‘gitosis.conf’ file inside of the ‘gitosis-admin’folder.If you’re configuring new users, simply add their public SSH keys to the‘keydir’ folder with the ‘.pub’ file extension. Refer to these users using thefilename of the public key file without the ‘.pub’ extension.For instance I’ve added a repository called ‘marketsim’, and then added‘marketsim’ to the ‘writable’ setting for the gitosis-admin group.[gitosis][group gitosis-admin]writable = gitosis-admin marketsimmembers = jason@mymacbook.local[repo marketsim]gitweb = nodescription = Market Simulation Appowner = Jason Millerdaemon = noAlternatively I could create a new group with writable access to the‘marketsim’ repository.[gitosis][group gitosis-admin]writable = gitosis-adminmembers = jason@mymacbook.local[group marketsim-team]members = jason@mymacbook.localwritable = marketsim[repo marketsim]gitweb = nodescription = Market Simulation Appowner = Jason Millerdaemon = noI could add ‘newteammember.pub’ in the ‘keydir’ folder, then add‘newteammember’ after ‘jason@mymacbook.local’ separated by a space. This wouldmake another team member part of that group which has write access to therepository.After configuring a new repository, and giving my own local user write accessto that repository, I’ve push the changes via a commit to the remotegitosis-admin repository.NOTE: You may receive the warning:remote: WARNING:gitosis.gitweb.set_descriptions:Cannot find 'yourrepo' in '/home/git/repositories'Ignore this and continue.Now I’m going to initialize my new repository and push it to the remote server.$ cd marketsim$ git init .$ git add .$ git commit -m "initial commit"$ git remote add origin git@vps.web-app-host.com:marketsim.git$ git push origin masterstdin: is not a ttyInitialized empty Git repository in /home/git/repositories/marketsim.git/Counting objects: 3, done.Writing objects: 100% (3/3), 209 bytes, done.Total 3 (delta 0), reused 0 (delta 0)To git@vps.web-app-host.com:marketsim.git * [new branch] master -> masterDeploying to ServerI’ve created an account with the username ‘marketsi’ to host the deployedapplication (cPanel only allows up to 8 characters for the username). 
ThenI’ve logged into that account and added my public key via the SSH/Shell Access Manage SSH Keys section of the cPanel account.For property deployment you’ll need to install the ‘Bundler’ gem so that thedeployment script can install the gems needed for your application. You’llneed to install this as ‘root’ so that the ‘bundle’ script is available under/usr/bin/bundle.$ ssh root@vps.web-app-host.comroot@vps [~]# gem install bundlerFetching: bundler-1.0.21.gem (100%)Successfully installed bundler-1.0.211 gem installedInstalling ri documentation for bundler-1.0.21...Installing RDoc documentation for bundler-1.0.21...root@vps [~]# which bundle/usr/bin/bundleI’ve modified my deploy.rb file for my Rails application like so:require "bundler/capistrano"load "deploy/assets"############################################################## Settingsset :application, "marketsim"default_run_options[:pty] = true # Must be set for the password prompt from git to workset :use_sudo, falseset :user, "marketsi" # The server's user for deploysset :deploy_to, "/home/#{user}/rails"set :ssh_options, { :forward_agent => true }set :domain, "marketsim.org"server domain, :app, :webrole :db, domain, :primary => true############################################################## Gitset :scm, :gitset :repository, "git@vps.web-app-host.com:marketsim.git"set :branch, "master"set :deploy_via, :remote_cache############################################################## Passengernamespace :passenger do desc "Restart Application" task :restart do run "touch #{current_path}/tmp/restart.txt" endendafter :deploy, "passenger:restart"Now to run the script to setup the deployment directories on the remote server.cap deploy:setupThis created a folder under /home/marketsi/rails with the ‘releases’ and‘shared’ folder. Now I’ll actually deploy.cap deployThe gems were installed with no issue by Bundler for me. Hopefully the samegoes for you.PassengerThe next step is to configure Apache to serve the Rails application for mydomain name. Install the Passenger gem via SSH logged in as root:gem install passengerInstall Passenger using the Apache module installation command. Alldependencies should be found and displayed in green.passenger-install-apache2-moduleAt the end of the installation script it provides the Apache configurationsettings which you place in httpd.conf. Since we’re using cPanel, whichover-writes the Apache configuration when the EasyApache system is used torebuild Apache, PHP, and other modules, place this configuration in/usr/local/apache/conf/includes/pre_main_global.conf.# /usr/local/apache/conf/includes/pre_main_global.confLoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11/ext/apache2/mod_passenger.soPassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.11PassengerRuby /usr/bin/rubyNext make a backup of the httpd.conf, run the configuration distiller script,rebuild, and then restart Apache.cp /usr/local/apache/conf/httpd.conf /usr/local/apache/conf/httpd.conf.bak-modrails/usr/local/cpanel/bin/apache_conf_distiller --update/scripts/rebuildhttpdconf/etc/init.d/httpd restartThe cPanel system makes the Apache Document Root for each account map to/home/username/public_html. 
Because of this you will need to remove the‘public_html’ directory, and then create a symlink from that directory to the‘public’ directory for your applications current release:rm -rf /home/marketsi/public_html/ln -s /home/marketsi/rails/current/public /home/marketsi/public_htmlchown marketsi:nobody public_html/chmod 750 public_html/Next add a .htaccess file in your application under the ‘public’ folder, andmake sure it contains ‘RailsBaseURI /’, as well as a directive with thePassengerAppRoot.RailsBaseURI /PassengerAppRoot /home/marketsi/rails/currentMySQL DatabaseIf your application returns an error page ‘We’re sorry, but something wentwrong.’, check the production.log file on the server. In my case, theapplication was running, but it couldn’t connect to the database with theexisting database.yml settings for production.As cPanel controls the MySQL databases and usernames, you’ll have to create adatabase manually via the cPanel, create the user and assign all privileges toit for the database, and then configure your database.yml appropriately."
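
A rough sketch of what that production block can end up looking like, since cPanel prefixes database and user names with the account name (all values below are hypothetical placeholders, not taken from this setup):

# config/database.yml
production:
  adapter: mysql2
  database: marketsi_production
  username: marketsi_rails
  password: the-password-created-in-cPanel
  host: localhost"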
} ,
{
"title" : "Configuring Rails 3.1.3 under Sub-URI",
"category" : "",
"tags" : "rails3.1, relative_url_root, sub-uri",
"url" : "/2012/01/configuring-rails-3-1-3-under-sub-uri/",
"date" : "2012-01-04 05:02:40 -0500",
"content" : "In setting up a new Rails app recently I was told that it needed to be servedunder the sub-URI of ‘/info’. I hadn’t done this before with a Rails app, andI expected that it could be tricky.I checked online to see how this is done and found references to the‘relative_url_root’ setting, but then shortly found that this has beendeprecated.I’m using Phusion Passenger with Apache 2, so I inserted the followingconfiguration into my VirtualHost entry for the site.Alias /info /home/myapp/current/public<Location /info> PassengerAppRoot /home/myapp/current RailsEnv production</Location>The Rails application was expecting requests from the root of the site still.I found a StackOverflow article where someone suggested configuring all theroutes under the “/info” scope like so:# config/routes.rbscope "/info" do root :to => "welcome#index" resources :postsendThis worked fine in development mode. I just made sure to pull up the pagesusing http://localhost:3000/info/I had tested this in production before building out the application further,and it seemed to work fine. Later on when I deployed after many updates, Ifound that the application was referencing precompiled assets using ‘/assets/’and not ‘/info/assets/’. Further investigation pointed out that I needed toconfigure assets with a prefix like so:# config/application.rbconfig.assets.prefix = "/info/assets"This seemed like it would help, but this caused assets to be precompiled under‘public/info/assets’.I had previously Googled for ‘rails 3.1 subdirectory path’, not registeringthat the more appropriate keyword is ‘sub URI’ for what I’m trying toaccomplish.It turns out that the best way to do this with Passenger is to mount theapplication to the sub-URI using the proper Apache configuration, and leave itup to the Rack interface to handle serving the requests to the Railsapplication, without using any Rails application configuration for the sub-URI.I removed the scoped routes, and the assets prefix, returning my applicationto it’s default configuration. I then created a symbolic link for ‘info’ inthe websites DocumentRoot, pointing to the ‘public’ folder of my applicationas per the Passenger documentation.ln -s /home/myapp/current/public /home/mysite/public_html/infoI added the appropriate configuration into my VirtualHost entry as advised bythe Passenger documentation, but this only worked for the root of the website,not my admin area I had wanted to serve from /info/admin/. All I got was a 404page from the main site.I found that the following configuration worked perfectly, with the symboliclink still needing to be present.Alias /info /home/myapp/current/public<Location /info> PassengerAppRoot /home/myapp/current RackEnv production RackBaseURI /info</Location>You might notice that I’m using ‘RackEnv production’ instead of‘RailsEnv production’. This is because I recall at some point previously thatRails 3.0 applications are Rack Applications, and thus you have to use‘RackEnv’ instead for Passenger to load the application in the properenvironment mode.Update 01/04/12: I’m using the TinyMCE-Rails plugin to provide a WYSIWYGeditor for one of the resources in my application. It’s not loading inproduction however (surprise, surprise). I checked into the precompiledJavascript file being served from the live server, and I see it includes thefollowing:window.tinyMCEPreInit=window.tinyMCEPreInit||{base:"/assets/tinymce",query:"3.4.7",suffix:""}The ‘base’ value should be ‘/info/assets/tinymce’. 
I see that in theTinyMCE-Rails gem the coding for this is:window.tinyMCEPreInit = window.tinyMCEPreInit || { base: '<%= Rails.application.config.assets.prefix %>/tinymce', query: '<%= TinyMCE::VERSION %>', suffix: ''};All call to ‘asset_path(‘rails.png’)’ in production returns‘/info/assets/rails-e4b51606cd77fda2615e7439907bfc92.png’ so Rails is honoringthe RackSubURI setting configured for Passenger. The following returns thesame path.The reason why this works is that the Rails asset helpers pull therelative_url_root value from the controller configuration, which must receiveit from the Rack configuration. When deploying the app using Capistrano, suchconfiguration is not present, and thus the correct path cannot be included inthe precompiled assets.The issue was reported via Git hub in issue #3259 and #2435, and afix will was implemented in Rails 3.2, with a possibly backport to Rails3.1.4.It appears that you’ll need to add a configuration to your application.rb orenvironments/production.rb, depending on how your development and productionenvironments are setup.# config/environments/production.rb# Use sub-uri in productionconfig.relative_url_root = '/info'Or possibly configure your deployment script to include the environmentvariable when precompiling.RAILS_RELATIVE_URL_ROOT="/info" bundle exec rake assets:precompileI wanted to test this, but I can’t seem to update to Rails 3.2, or install anew Rails app using edge Rails.$ rails new testapp --edge...Bundler could not find compatible versions for gem "actionpack": In Gemfile: sass-rails (>= 0) ruby depends on actionpack (~> 3.1.0) ruby rails (>= 0) ruby depends on actionpack (4.0.0.beta)I’m just going to configure my app under a subdomain instead of sub-uri fornow."
} ,
{
"title" : "Custom Rake Tasks Not Loading",
"category" : "",
"tags" : "rake",
"url" : "/2012/01/custom-rake-tasks-not-loading/",
"date" : "2012-01-01 21:57:34 -0500",
"content" : "I recently went to create a new Rake task under /lib/tasks in a Railsapplication I’m working on. I didn’t understand why the rake tasks weren’tshowing when I would run rake -T from the command line.When I’d try to run the task itself I would get a ‘Don’t know how to buildtask’ error.I just realized that I was naming my task file with the ‘.rb’ extension, andnot ‘.rake’. Doh!"
} ,
{
"title" : "Troubleshooting ActiveResource Requests",
"category" : "",
"tags" : "ActiveResource, HighRise, REST API",
"url" : "/2012/01/troubleshooting-activeresource-requests/",
"date" : "2012-01-01 21:46:52 -0500",
"content" : "I’m currently working on an app which integrates with the HighRise APIusing the Highrise Ruby wrapper gem. The classes defined by the gem relyon ActiveResource, the Ruby library included with Rails for interfacing withRESTful resources like the HighRise API.Sometimes I’m not sure if the requests being made via the commands I’m usingare making the right calls on the HighRise API.I found this thread on StackOverflow:How do I view the HTTP response to an ActiveResource request?. Thesolution I found helpful was this one onoverriding ActiveResource::Connection so that it outputs the debug output.This worked for me even with Rails 3.1.Just in case this article with the code disappears, here is the initializercode you can use to view the HTTP session data from the console. I’ve added anif statement to make sure the HTTP data is only outputted to the standardoutput in development mode, for security purposes, as well as requiresthe environment variable ‘HTTPDEBUG’.# /config/initializers/connection.rbclass ActiveResource::Connection # Creates new Net::HTTP instance for communication with # remote service and resources. def http http = Net::HTTP.new(@site.host, @site.port) http.use_ssl = @site.is_a?(URI::HTTPS) http.verify_mode = OpenSSL::SSL::VERIFY_NONE if http.use_ssl http.read_timeout = @timeout if @timeout # Here's the addition that allows you to see the output if Rails.env == 'development' &amp;&amp; ENV['HTTPDEBUG'] http.set_debug_output $stderr end return http endendYou can open the Rails console using ‘HTTPDEBUG=true rails c’ to activate theconsole with debugging output displayed in the console mode."
} ,
{
"title" : "Example Rake Task",
"category" : "",
"tags" : "rake",
"url" : "/2012/01/example-rake-task/",
"date" : "2012-01-01 21:26:28 -0500",
"content" : "Here is example rake task code which you can use and modify the next timeyou’re setting up new Rake tasks for a Rails app from scratch. The exampleincludes the syntax for setting the default task, running multiple tasks inorder, and a task which includes multiple arguments. This coding syntax workswith Rails 3.1.# /lib/tasks/mytask.rakenamespace :mytask do task :default => 'mytask:full_run' desc "run all tasks in proper order" task :full_run => [:example_task_one, :example_task_two] do puts "All tasks ran and completed at #{Time.now}" end desc "example task without arguments" task :example_task_one => :environment do # task code goes here puts "Example Task One completed" end desc "example task with arguments" task :example_task_two, [:arg1, :arg2] => :environment do |t, args| # if no arguments, display docs if args.count == 0 puts '' puts "rake mytask:example_task[arg1,arg2]" puts "arg1: description of first argument" puts "arg2: description of second argument" puts '' puts "example:" puts "rake mytask:example_task[123,'456']" puts '' else # task code goes here puts "Running task with arg1: #{args.arg1}, and arg2: #{args.arg2}" puts "Example Task Two completed" end endendSpecial thanks to Jason Seifer for his Rake Tutorial which made thispossible."
} ,
{
"title" : "Adding a New User in Ubuntu",
"category" : "",
"tags" : "",
"url" : "/2011/12/adding-a-new-user-in-ubuntu/",
"date" : "2011-12-29 21:08:48 -0500",
"content" : "Adding UserWhen setting up a new website manually on an Ubuntu server you need toestablish a user account with a home directory, and Bash shell access to theserver.useradd -m testuser -s /bin/bashAfter creating the account you’ll want to assign a password for the account.passwd testuser"
} ,
{
"title" : "Resolving issues with Namespaced Models in Rails 3.1.0",
"category" : "",
"tags" : "rails, namespaced models, rails3.1",
"url" : "/2011/12/resolving-issues-with-namespaced-models-in-rails-3-1-0/",
"date" : "2011-12-06 22:09:51 -0500",
"content" : "I recently was tasked with upgrading an application from Rails 2.3.8 to Rails I choose to upgrade it to Rails 3.1, because why upgrade once and then haveto do it again later.After upgrading and testing many points of the system locally, I was ready topush the upgraded application to the production server. After pushing it out Istarted to notice that a certain rake task was failing to run via a cronjob Ihad setup. This rake task worked with certain non-ActiveRecord models, which Ihad setup with a hierarchy which utilized inheritance and namespacing.Under Rails 2.3.8 I had a file, /app/models/rets_map/rets_map_base.rb. Theerror I was receiving from the rake task involved some sort of issue loadingthe class from this file.As an experiment I renamed the file as ‘base.rb’ so that it’s path was/app/models/rets_map/base.rb. I figured that Rails 3.1.0 expected thisnaming convention. I was right, kind of…$ rails console/Users/jason/Sites/rj4/rails/app/controllers/site_setup_controller.rb:120: warning: string literal in conditionLoading development environment (Rails 3.1.0)irb(main):001:0> RetsMap=> RetsMapirb(main):002:0> RetsMap::Base=> RetsMap::Baseirb(main):003:0> reload!Reloading...=> trueirb(main):004:0> RetsMap=> RetsMapirb(main):005:0> RetsMap::Base=> RetsMap::Baseirb(main):006:0> reload!Reloading...=> trueirb(main):007:0> RetsMap=> RetsMapirb(main):008:0> RetsMap::BaseLoadError: Expected /Users/jason/Sites/rj4/rails/app/models/rets_map/base.rb to define RetsMap::Base from /opt/local/lib/ruby/gems/1.8/gems/activesupport-3.1.0/lib/active_support/dependencies.rb:490:in `load_missing_constant' from /opt/local/lib/ruby/gems/1.8/gems/activesupport-3.1.0/lib/active_support/dependencies.rb:181:in `const_missing' from /opt/local/lib/ruby/gems/1.8/gems/activesupport-3.1.0/lib/active_support/dependencies.rb:179:in `each' from /opt/local/lib/ruby/gems/1.8/gems/activesupport-3.1.0/lib/active_support/dependencies.rb:179:in `const_missing' from (irb):8For some reason after I would tell the console to reload the models, I got anerror that the file didn’t define RetsMap::Base….but it did damnit!!The error was regarding ActiveSupport, so I tried to reinstall the Rails gemand ActiveSupport gem, but this didn’t help.The solution I had found was to upgrade to Rails 3.1.1. After doing this, theerror no longer occurred, even after I had reloaded the models.Ooh, also, I didn’t mention. Previously I was using the following command in/config/application.rb:config.autoload_paths += Dir["#{config.root}/app/models/**/"]As part of my troubleshooting I figured that by default Rails will try to loadmodels from subdirectories if they conform to the proper directory and filenaming required for namespaced models. When you add models to the autoloadpaths configuration, I think it expects that those conventions don’t apply..or maybe this just causes problems. I commented this line out and simply addedeach subdirectory that contained non-namespaced models.config.autoload_paths += %W(#{config.root}/app/models/email/)config.autoload_paths += %W(#{config.root}/app/models/sub/)config.autoload_paths += %W(#{config.root}/app/models/upload/)"
} ,
{
"title" : "Paperclip error with non-image file",
"category" : "",
"tags" : "paperclip, identify, imagemagick, non-image",
"url" : "/2011/12/paperclip-error-with-non-image-file/",
"date" : "2011-12-05 19:43:25 -0500",
"content" : "Recently I updated to Rails 3.1 from 2.3.8 for a project I’m working on.Paperclip version 2.4.5 was working perfectly well for me locally on my Mac OSX 10.7.2 laptop, with ImageMagick version 6.7.3-1.We had just launched the upgraded Rails 3.1 application to our productionserver, which went smoothly, but upon my checklist of pages to test (we don’thave test suite in use yet) I noticed that the file upload was failing.Further investigation showed that the object using paperclip was failing tosave:upload.errors.inspect:#["/tmp/stream20111205-28441-10oj2ck-0.csv is not recognized by the 'identify' command.", "/tmp/stream20111205-28441-10oj2ck-0.csv is not recognized by the 'identify' command."]}>, @base=#>Further investigation helped me identify this ‘identify’ command as anexecutable provided by ImageMagick. I ran this command on the temporary filewhich was located in /tmp, and I got an error such as the one that follows onboth my local machine and in production. The odd thing was that the fileupload worked locally, but was failing remotely.$ identify test-import20111205-1909-1xsbr19-0.csvidentify: no decode delegate for this image format 'test-import20111205-1909-1xsbr19-0.csv' @ error/constitute.c/ReadImage/532.This didn’t make sense. I was uploading a CSV file, not an image. I wasn’teven using the ‘:styles’ hash value in my model with the ‘has_attached_file’expression. Why was it checking to see what type of image the file is?!?!I tried to upgrade and reinstall ImageMagick on the production server, butthis had no effect. This made me suspect that paperclip was the cause of theissue. I also created an initalizer for paperclip under/config/initalizers/paperclip.rb pointing out the image_magick_path, which Ihad seen resolve other issues with ‘identify’ errors.if Rails.env == 'production' Paperclip.options[:image_magick_path] = "/usr/bin/"endThis didn’t help at all.I tried to reconfigure the application to use Paperclip version 2.4.4 and2.4.3, but that didn’t resolve the issue.Finally I updated my Gemfile to use the latest Github version of Paperclip. Ideployed this update, tested, and finally the issue was resolved.gem 'paperclip', "~> 2.4.5", :git => 'git://github.com/thoughtbot/paperclip.git'"
} ,
{
"title" : "Issues with RVM",
"category" : "",
"tags" : "rails, os x lion, rvm",
"url" : "/2011/11/issues-with-rvm/",
"date" : "2011-11-13 23:11:04 -0500",
"content" : "I recently decided to check out BrowserCMS, to evaluate how it work anddecide to use it…or RefineryCMS. I didn’t expect that BrowserCMS wouldrequire Ruby 1.9.2. I’ve been running with Ruby 1.8.6 or 1.8.7 for quite awhile now without any issues. It looks like it was time that I installRVM: Ruby Version Manager.I read through the documentation, followed the instructions to ensure thatprerequisite software was installed. I specifically made sure that certaincommands that it stated would be needed were all in my command line path under/opt/local, where the MacPorts all install. I try to maintain a command lineenvironment that’s almost entirely dependent on MacPorts. Prior to ensuringthis I ran into issues where software I’ve tried to install was using oneutility or library provided natively by Mac OS X, while using some otherutility or library I’ve installed separately, conflict due to differences andstop certain software from building properly upon install.I also took note that since I’m using MacPorts, I shouldconfigure my $HOME/.rvmrc file so that RVM uses my MacPort libraries whenbuilding gems.Installation of RubyThe installation of the RVM software itself went smoothly. I felt good knowingthat I was about to learn how RVM works and it will become another tool that Igladly assign to my arsenal of software to be used in the future (and add tomy resume).Then I ran into an issue with something in ‘readline.c’ when I attempted toinstall Ruby 1.8.7.jason@imac ~$ rvm install 1.8.7 --with-openssl-dir=/opt/localInstalling Ruby from source to: /Users/jason/.rvm/rubies/ruby-1.8.7-p352, this may take a while depending on your cpu(s)...ruby-1.8.7-p352 - #fetchingruby-1.8.7-p352 - #extracting ruby-1.8.7-p352 to /Users/jason/.rvm/src/ruby-1.8.7-p352ruby-1.8.7-p352 - #extracted to /Users/jason/.rvm/src/ruby-1.8.7-p352Applying patch 'stdout-rouge-fix' (located at /Users/jason/.rvm/patches/ruby/1.8.7/stdout-rouge-fix.patch)ruby-1.8.7-p352 - #configuringruby-1.8.7-p352 - #compilingERROR: Error running 'make ', please read /Users/jason/.rvm/log/ruby-1.8.7-p352/make.logERROR: There has been an error while running make. Halting the installation.jason@imac ~$ tail -10 /Users/jason/.rvm/log/ruby-1.8.7-p352/make.log...compiling readline/usr/bin/gcc-4.2 -I. -I../.. -I../../. -I../.././ext/readline -DHAVE_READLINE_READLINE_H -DHAVE_READLINE_HISTORY_H -DHAVE_RL_FILENAME_COMPLETION_FUNCTION -DHAVE_RL_COMPLETION_MATCHES -DHAVE_RL_DEPREP_TERM_FUNCTION -DHAVE_RL_COMPLETION_APPEND_CHARACTER -DHAVE_RL_BASIC_WORD_BREAK_CHARACTERS -DHAVE_RL_COMPLETER_WORD_BREAK_CHARACTERS -DHAVE_RL_BASIC_QUOTE_CHARACTERS -DHAVE_RL_COMPLETER_QUOTE_CHARACTERS -DHAVE_RL_FILENAME_QUOTE_CHARACTERS -DHAVE_RL_ATTEMPTED_COMPLETION_OVER -DHAVE_RL_LIBRARY_VERSION -DHAVE_RL_EVENT_HOOK -DHAVE_RL_CLEANUP_AFTER_SIGNAL -DHAVE_REPLACE_HISTORY_ENTRY -DHAVE_REMOVE_HISTORY -I/opt/local/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -I/opt/local/include -fno-common -arch x86_64 -g -Os -pipe -no-cpp-precomp -fno-common -pipe -fno-common -c readline.creadline.c: In function &lsquo;username_completion_proc_call&rsquo;:readline.c:730: error: &lsquo;username_completion_function&rsquo; undeclared (first use in this function)readline.c:730: error: (Each undeclared identifier is reported only oncereadline.c:730: error: for each function it appears in.)make[1]: *** [readline.o] Error 1make: *** [all] Error 1I tried to install ‘readline’ using MacPorts because I read online that thiswas a common issue with Mac OS X users under Snow Leopard. 
This didn’t help,and in fact I got an error after trying to install readline from source asrecommended in the forum post I found regarding the subject.After further search, I realized that the RVM documentation (which is prettygood I must say) has a page devoted to this issue.Issue Installing Custom GemsLater I had installed Ruby 1.9.2 as well, because I needed it for BrowserCMS.I was trying to install gems for Ruby, but each time it would give me the‘Invalid gemspec … invalid date format in specification’ error that I recallfrom a time when I was using a newer version of RubyGems than the gems I hadinstalled were made for, like:Invalid gemspec in [/opt/local/lib/ruby/gems/1.8/specifications/paperclip-2.4.3.gemspec]: invalid date format in specification: "2011-10-05 00:00:00.000000000Z"I made sure I was running under Ruby 1.9.2, and that the version of RubyGemswould be the one associated with Ruby version 1.9.2, and then upgraded it toversionimac:Sites jason$ rvm 1.9.2imac:Sites jason$ rvm gemdir/Users/jason/.rvm/gems/ruby-1.9.2-p290imac:Sites jason$ rvm rubygems currentRemoving old Rubygems files...Installing rubygems-1.8.10 for ruby-1.9.2-p290 ...Installation of rubygems completed successfully.I tried to check or change which version of gem I was using, and it would tellme it’s using one installed under RVM, but still it would give errors as if itwas using an older version of RubyGems.imac:Sites jason$ which gem/Users/jason/.rvm/rubies/ruby-1.9.2-p290/bin/gemimac:Sites jason$ sudo gem install browsercmsInvalid gemspec in [/opt/local/lib/ruby/gems/1.8/specifications/capistrano-2.9.0.gemspec]: invalid date format in specification: "2011-09-24 00:00:00.000000000Z"I then found out that I had a .gemrc file setup that was loading in my commandline environment, and thus interfering with the dependency loading which RVMprovides.imac:Sites jason$ cd ~imac:~ jason$ cat .gemrcgemhome: /opt/local/lib/ruby/gems/1.8gempath: - /opt/local/lib/ruby/gems/1.8 - /Users/jason/gemsimac:~ jason$I simply renamed this file as .gemrc.bak and this resolved the issue so thatgems I install under Ruby v1.9.2 will use the RubyGem version associated withthe Ruby 1.9.2 installation."
} ,
{
"title" : "Setting up Ubuntu for Rails App via Passenger",
"category" : "",
"tags" : "",
"url" : "/2011/10/setting-up-ubuntu-for-rails-app-via-passenger/",
"date" : "2011-10-30 01:11:11 -0400",
"content" : "These instructions apply to Ubuntu 11.04 (natty).Run the following to install Ruby, the Apache2 web server, Curl DevelopmentHeaders with SSL Support.sudo apt-get updatesudo apt-get install build-essentialsudo apt-get install ruby ruby1.8-devsudo apt-get install apache2 apache2-mpm-prefork apache2-prefork-dev libcurl4-openssl-devNext download Ruby Gems, extract the package, install Ruby Gems, and thencreate a symbolic link from /usr/bin/gem1.8 to /usr/bin/gemcd ~wget http://rubyforge.org/frs/download.php/75309/rubygems-1.8.10.tgztar -zxvf rubygems-1.8.10.tgzcd rubygems-1.8.10sudo ruby setup.rbln -s /usr/bin/gem1.8 /usr/bin/gemWith Ruby Gems installed we can install the Passenger gem.sudo gem install passengerNow we can run the installation of Passenger for Apache2.passenger-install-apache2-moduleAt the end of the build the installation instructions tell you to addsomething like the following to your Apache configuration file.LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.9/ext/apache2/mod_passenger.soPassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.9PassengerRuby /usr/bin/ruby1.8Do this by copying the commands, then pasting them into /etc/apache2/modsavailable/passenger.loadsudo nano /etc/apache2/mods-available/passenger.loadAfter saving the file, run the following commands to load the module andrestart the Apache web server.a2enmod passenger/etc/init.d/apache2 restartNow you can create a virtual host file for your site under/etc/apache2/sites-availablesudo nano /etc/apache2/sites-available/yoursite.comThen setup this virtualhost file to load and tell Apache to reload theconfigurationa2ensite yoursite.com/etc/init.d/apache2 reloadFor your application to run you’ll need to install the Rails gem of course.sudo gem install rails"
} ,
{
"title" : "Formtastic use of semantic_form_remote_for",
"category" : "",
"tags" : "",
"url" : "/2011/10/formtastic-use-of-semantic-form-remote-for/",
"date" : "2011-10-20 18:29:02 -0400",
"content" : "With the update to Formtastic version 2.0.0.rc1 the ‘semantic_remote_form_for’method was removed and support was added to ‘remote_form_for’ used like this:<%= semantic_form_for @contact, :remote => true, :url => '/contact' do |f| %>This is to conform with the new Rails 3 update where form_remote_for isreplaced with form_for with a remote hash value passed to it as an argument."
} ,
{
"title" : "Exporting Routes in Rails 3",
"category" : "",
"tags" : "",
"url" : "/2011/10/exporting-routes-rails/",
"date" : "2011-10-20 17:36:54 -0400",
"content" : "I’m currently upgrading a project I’m working on from Rails 2.3.8 to Rails 3.1.As part of this upgrade I need to test the entire application for issues,because we haven’t actually written any tests.To help with this, I’d like to export all the routes so that I can test themone by one, and keep track of what I’ve tested and fixed already.Typically I output the routes by using this command from the root of the Railsapp:rake routesIn this case I want to export all the routes into a format which I can post toour wiki as a table. This will require a custom script, but I’m not sure howRails 3 internally stores routes.As a start I found that rake will reveal the source of the ‘routes’ task usingthis command$ rake --where routesrake rails:upgrade:routes /Users/jason/myapp/vendor/plugins/rails_upgrade/lib/tasks/rails_upgrade_tasks.rake:27rake routes /opt/local/lib/ruby/gems/1.8/gems/railties-3.1.0/lib/rails/tasks/routes.rake:2So it appears that what I’m looking for is in/opt/local/lib/ruby/gems/1.8/gems/railties-3.1.0/lib/rails/tasks/routes.rake.I’ve modified this task and added it to my application under/lib/tasks/routes.rake like so:namespace :routes do desc 'Print out all defined routes in CSV format. Target specific controller with CONTROLLER=x.' task :csv => :environment do Rails.application.reload_routes! all_routes = Rails.application.routes.routes if ENV['CONTROLLER'] all_routes = all_routes.select do |route| route.defaults[:controller] == ENV['CONTROLLER'] end end routes = all_routes.collect do |route| reqs = route.requirements.dup reqs[:to] = route.app unless route.app.class.name.to_s =~ /^ActionDispatch::Routing/ reqs = reqs.empty? ? "" : reqs.inspect {:name => route.name.to_s, :verb => route.verb.to_s, :path => route.path, :controller => route.requirements[:controller], :action => route.requirements[:action]} end # Skip the route if it's internal info route routes.reject! { |r| r[:path] =~ %r{/rails/info/properties|^/assets} } # name_width = routes.map{ |r| r[:name].length }.max # verb_width = routes.map{ |r| r[:verb].length }.max # path_width = routes.map{ |r| r[:path].length }.max puts "controller,action,method,path,name" routes.each do |r| puts "#{r[:controller]},#{r[:action]},#{r[:verb]},#{r[:path]},#{r[:name]}" end endendI can create a CSV file on my desktop, which contains all my routes by simplyrunning this command now:rake routes:csv > ~/Desktop/rake-routes.csv"
} ,
{
"title" : "Using URL Helpers in Models or Rake Tasks",
"category" : "",
"tags" : "",
"url" : "/2011/10/using-url-helpers-models-or-rake-tasks/",
"date" : "2011-10-19 19:41:51 -0400",
"content" : "If for some reason you need to use URL helpers which are based on the routesyou’ve defined in Rails 3.1, simply add the following to the model method orrake task:include Appname::Application.routes.url_helpersMake sure you replace ‘Appname’ with the name of your app, which should be thesame name as the root folder for your application. You can also obtain it from/config/application.rb where it is defined like so:module Appname class Application < Rails::Application endendIf you’re needing to render a partial inside of a rake task with Rails 3, youcould try using the solution suggested in this article. I had to do thisand I pulled it off by adding the include above to the OfflineTemplate class,and then using this code:Appname::Application.routes.default_url_options = { :host => 'www.mydomain.com'}template = OfflineTemplate.newpartial_results_html = template.render_to_string( :partial => "shared/search_results_email_html", :object => search_object, :format => :html)"
} ,
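For context, here is a minimal sketch of the rake-task case described in the entry above. 'Appname', the task name, the Contact model, and the host are placeholders, and a resources :contacts route is assumed.

# lib/tasks/links.rake - illustrative only; names are placeholders.
namespace :links do
  desc 'Print the public URL for each contact'
  task :print => :environment do
    # Mix the URL helpers into this context, as described in the entry above.
    include Appname::Application.routes.url_helpers
    # URL helpers need a host when running outside of a web request.
    Appname::Application.routes.default_url_options = { :host => 'www.mydomain.com' }

    Contact.all.each do |contact|
      puts contact_url(contact)
    end
  end
end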
{
"title" : "Building a Query String from a Hash with Rails 3",
"category" : "",
"tags" : "",
"url" : "/2011/10/building-query-string-from-hash-rails/",
"date" : "2011-10-19 17:35:09 -0400",
"content" : "I have a model has a method that generates and stores a cached link to it’sown view for use in mailers. Under Rails 2 the method which generated thislink created the beginning of the URL based on the owner of the object (thisis a multi-domain system I’m working on), however it required that a hash ofparameters be included in the link.Under Rails 2 the ‘options’ hash would be passed to build_query_string insideof my method like so:params = ActionController::Routing::Route.new.build_query_string(options)Under Rails 3 I receive this error:wrong number of arguments (0 for 7)It turns out that under Rails 3 the Hash library includes a to_query method.irb(main):001:0> h = {:blah => '1', :blah2 => '2'}=> {:blah=>"1", :blah2=>"2"}irb(main):002:0> h.to_query=> "blah2=2&amp;blah=1"Thanks to mjrussel for posting this on StackOverflow."
} ,
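To make the replacement concrete, here is a rough sketch of building a link with Hash#to_query under Rails 3 (ActiveSupport); the option names and URL are made up for illustration, and brackets in nested keys come out URL-encoded.

# Illustrative only - option names and the URL are placeholders.
require 'active_support/core_ext/object/to_query'

options = { :property_id => 42, :search => { :city => 'Orlando', :beds => 3 } }
query   = options.to_query
# => "property_id=42&search%5Bbeds%5D=3&search%5Bcity%5D=Orlando" (keys are sorted)

link = "http://www.example.com/properties?#{query}"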
{
"title" : "Rails 3 Autoloading with Namespaced Models",
"category" : "",
"tags" : "",
"url" : "/2011/10/rails-autoloading-namespaced-models/",
"date" : "2011-10-18 18:50:03 -0400",
"content" : "I’m working through an upgrade from Rails 2.3.8 to Rails 3.1, and a set ofname spaced models that I setup are giving errors when I run a certain Raketask that relies on them. I looked into the issue and it appears that I needto learn the way Rails 3 loads models.In /config/application.rb I’m using the following line to auto load allmodels from /app/models and any subdirectories.config.autoload_paths += Dir["#{config.root}/app/models/**/"]For this to work I found that I needed to understand how Rails 3 interpretsthe folder and filenames. Some models weren’t even registering as available,so I created folders for each class and put their files under each folder.This seemed to resolve some errors, but then I was receiving errors when Iwould try to instantiate a new object from one of the defined classes in theRails console:NoMethodError: undefined method `new' for RetsMap::Sys_Local::Res_property::Class_1:ModuleIn my case RetsMap::Sys_Local::Res_property inherits from RetsMap::Base, andRetsMap::Sys_Local::Res_property::Class_1 inherits fromRetsMap::Sys_Local::Res_property. I then tried to see why I was receiving thismention of ‘Module’ at the end of my ‘Class_1’ by starting with RetsMap::Base.irb(main):001:0> RetsMap.class=> Moduleirb(main):002:0> RetsMap::Base=> RetsMap::Baseirb(main):003:0> RetsMap::Base.class=> ModuleSure enough this showed that Rails was interpreting my class as a module.Since these sub-classes inherit from RetsMap::Base, which is loading as aModule, I figured that I needed to start there. I moved the file which definedthe class from /app/models/rets_map/base/ folder to/app/models/rets_map/base.rb and this caused the class to load as a class.irb(main):005:0> RetsMap.class=> Moduleirb(main):006:0> RetsMap::Base.class=> ClassI take it that this means that the folders are meant to store modules, withthe files representing the classes. At the same time, a subdirectory must becreated for each subsequent namespace or else you’ll receive an error that itwas expecting the file to define itself in the namespace of the folder itbelongs to. If you create a file in a directory with the same name as thedirectory, it can define a class for that namespace instead of being loaded asa module."
} ,
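A rough sketch of the file-to-constant layout the entry above arrives at, using the class names from the post; the exact behaviour also depends on the autoload_paths glob shown there.

# Illustrative layout only (Rails 3 autoloading, names from the post):
#   app/models/rets_map/base.rb                            defines the base class
#   app/models/rets_map/sys_local/res_property.rb          defines its subclass
#   app/models/rets_map/sys_local/res_property/class_1.rb  defines the leaf class
# The rets_map/ and sys_local/ folders are picked up as the RetsMap and Sys_Local modules.

# app/models/rets_map/base.rb
class RetsMap::Base
end

# app/models/rets_map/sys_local/res_property.rb
class RetsMap::Sys_Local::Res_property < RetsMap::Base
end

# app/models/rets_map/sys_local/res_property/class_1.rb
class RetsMap::Sys_Local::Res_property::Class_1 < RetsMap::Sys_Local::Res_property
end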
{
"title" : "Form Fields not Displaying with Formtastic",
"category" : "",
"tags" : "",
"url" : "/2011/10/form-fields-not-displaying-formtastic/",
"date" : "2011-10-18 13:48:34 -0400",
"content" : "The Rails project I’m currently working on uses Formtastic, a Rails formbuilder plugin. The projects description says “Formtastic is a RailsFormBuilder DSL (with some other goodies) to make it far easier to createbeautiful, semantically rich, syntactically awesome, readily stylable andwonderfully accessible HTML forms in your Rails applications.” I wasn’t surewhat DSL means, but found in the projects wiki on the About page that itstands for Domain Specific Language.So I’m currently upgrading from Rails 2.3.8 to Rails 3.1, and I found that theform fields were not showing for the pages using the Formtasticsemantic_form_for code blocks. I updated my own code so that the equal sign isincluded after the opening Ruby code tag in the views such as:<%= semantic_form_for @product do |f| %>… instead of …<% semantic_form_for @product do |f| %>This was one of the first steps I performed as advised by the rails 3 upgradeinstructions. I just now realized however that inside of the code block areother code blocks for the form fields and the submit button. These too mustinclude the equal sign, which was the reason why my form fields were notdisplaying.<%= f.inputs :name => "Author", :for => :author do |author_form| %> <%= author_form.input :first_name %> <%= author_form.input :last_name %><% end %><%= f.buttons do %> <%= f.commit_button %><% end %>"
} ,
{
"title" : "Adding Event Listeners to Google-Maps-for-Rails Markers",
"category" : "",
"tags" : "",
"url" : "/2011/10/adding-event-listeners-to-google-maps-for-rails-markers/",
"date" : "2011-10-11 14:24:27 -0400",
"content" : "I’m currently working on a Ruby on Rails project that was setup under Rails 2,and we’re upgrading it to Rails 3 via a separate branch.We were using a gem called Eschatonto provide Google Maps on certain pages of the website, but it appears thatEschaton is abandoned and actually going to be deleted on October 31, 2011.Because of this I forked the project under my own Github account.After looking around for a suitable replacement we decided onGoogle-Maps-for-Rails, version (1.3.0). Previously with Eschaton we hadjQuery code associated with event listeners that were triggered when themarkers on the map are selected. When selected the items being plotted on themap, which are displayed to the left of the map, were highlighted using CSSafter being clicked on, and page also scrolled down to the item selected onthe map. To reproduce this we need to tie events to the markers using GoogleMaps for Rails (gmaps4rails).After some searching and testing I found this to be the solution. The‘content_for :scripts’ container ensures that the Javascript is inserted inthe bottom of the page via the ‘<%= yield :scripts %>’ that is placed inyour layout. The Javascript is also outputted at the bottom of the page AFTERthe Javascript which the gem outputs that initializes and loads the Google Map.<%= gmaps4rails(@json) %><% content_for :scripts do %> <script type="text/javascript" charset="utf-8"> Gmaps.map.callback = function() { for (var i = 0; i < Gmaps.map.markers.length; ++i) { google.maps.event.addListener(Gmaps.map.markers[i].serviceObject, 'click', function(object) { alert('lat: '+object.latLng.Na+' long: '+object.latLng.Oa); }); } } </script><% end %>After further coding I found that I needed to reference the MySQL ID of eachof the objects, which is included in the ID of each element on the page. Iupdated my controller to use the following so that the MySQL ID would beavailable in the JSON:@json = @properties.to_gmaps4rails do |property| "\"id\": \"#{property[0].id}\""endI then needed to figure out how to pass this ID to the function that is passedto the ‘google.maps.event.addListener’. I did this like so:<%= gmaps4rails(@json) %><% content_for :scripts do %> <script type="text/javascript" charset="utf-8"> Gmaps.map.callback = function() { for (var i = 0; i < Gmaps.map.markers.length; ++i) { var marker = Gmaps.map.markers[i].serviceObject marker.marker_id = Gmaps.map.markers[i].id; google.maps.event.addListener(marker, 'click', function() { alert('marker id: '+this.marker_id); }); } } </script><% end %>"
} ,
{
"title" : "Issues with Bluetooth in OS X Lion after Upgrade",
"category" : "",
"tags" : "",
"url" : "/2011/09/issues-with-bluetooth-in-os-x-lion-after-upgrade/",
"date" : "2011-09-28 22:08:39 -0400",
"content" : "I recently upgraded from OS X Lion (10.7.1) from Snow Leopard so that I couldbe up-to-date and benefit from any good features. Truthfully I liked the oldExpose and Spaces better than this new Mission Control interface for virtualdesktops…but oh well it will do.After upgrading to Lion the one thing that has been most frustrating is thatmy Bluetooth headset doesn’t work anymore. It syncs up with my MacBook Pro,but it doesn’t send audio from the mic nor does it playback audio. I’m using aVXI BlueParrott B250-XT.I’m referring to the instructions on this page to resolve the issue whichare: Don’t have Power Management configuration issue causing bluetooth option tonot be present. Unplugged USB devices Ran Applications > Utilities > Disk Utility and ran a ‘Repair DiskPermissions’ on my primary drive. Deleted user preference files for Bluetooth(~/Library/Preferences/com.apple.Bluetooth and~/Library/Preferences/com.apple.Bluetooth.plist) Reset PRAM Reset Power Management System Opened Bluetooth Explorer -> Utilities Menu -> Modify Software andDevice Configuration -> Checked: ‘Recant’ Connected Apple HID Devices ‘Full Factory Reset’ connected Apple HID devices Delete link keys from the Bluetooth module Delete global bluetooth preference file (/L/P/com.apple.Bluetooth.plist) Remove all favorite devices Unconfigure all ‘configured’ devices Restart the Bluetooth daemon (‘blued’) process This all did not resolve my issue. I then thought to myself, well…the deviceis pairing and connecting. Perhaps it’s a sound issue. I then proceeded tofind some sort of forum post or article explaining how to reset thesound/audio settings which might be corrupted since upgrading to OS X Lion….I’ll update this post once I troubleshoot further."
} ,
{
"title" : "Redirect_to not working",
"category" : "",
"tags" : "http post",
"url" : "/2011/09/redirect_to-not-working/",
"date" : "2011-09-27 19:37:59 -0400",
"content" : "I was just working on a Ruby on Rails controller method that receivesinformation from the previous form via HTTP POST. I coded it so that ifcertain form variables weren’t present it would set a flash message andredirect to the form page. I tried and tried and still the redirect wasn’tworking. I reset my web server, and even restarted my computer, but stil thisdidn’t resolve the issue.I then realized that perhaps redirects aren’t possible with HTTP POST’s, onlyGET requests. I ended up just creating a generic view for displaying errors,and will render that view and then ‘return FALSE’ inside of the if statementwhen an error is detected."
} ,
{
"title" : "Advanced Use of Will_Paginate",
"category" : "",
"tags" : "rails, pagination, will_paginate",
"url" : "/2011/09/advanced-use-of-will_paginate/",
"date" : "2011-09-23 21:08:56 -0400",
"content" : "I’m building an index of contacts, displayed with paginated links provided by will_paginate.The wiki for this plugin advises you on how to do setup your controllermethod, and what to put in the view to obtain a simple set of links, such as:# /app/controllers/contact_controller.rbdef index @contacts = Contact.paginate :page => params[:page], :per_page => 10, :order => 'created_at DESC'end<!-- /app/views/contact/index.html.erb --><%= will_paginate @contacts %>What I’m not finding however are more advanced methods of using thewill_paginate plugin. Here are a few things I’ve found.Links at top and bottom of paginated sectionYou can add the paginated links to the top and bottom of your paginatedsection using this syntax:<% paginated_section @contacts do %> <table id="contacts"> <%= render(:partial => "contact_row", :collection => @contacts) %> </table><% end %>Display Page Entries InfoYou can display text on your page such as ‘Displaying contacts11 - 12 of 12 in total’ by using the following view helper.<%= page_entries_info @contacts %>Current and Total Page NumberIf you want to display the total number of pages in your own way, you can dothis by using the ‘total_pages’ method of the paginated collection.You are viewing page <%= @contacts.current_page %>of <%= @contacts.total_pages %>"
} ,
{
"title" : "Getting File object for Paperclip Attachment via S3",
"category" : "",
"tags" : "rails, csv, paperclip, s3",
"url" : "/2011/09/getting-file-object-for-paperclip-attachment-via-s3/",
"date" : "2011-09-22 18:34:10 -0400",
"content" : "I’m working on a project where we are using the Paperclip plugin for Ruby onRails for file handling and associations with other models.I’m working on a CSV import option right now, using this tutorial to helpme get a head start on how to break the contents of the file up into rows andcolumns. I’m not passing the file directly from a form to the controllermethod, but I’m opening the file that has already been saved after beinguploaded via AJAX.I couldn’t find out how I would get the file itself into an object that I canpass to the parser like so:@parsed_file=CSV::Reader.parse(params[:dump][:file])It turns out that you simply need to refer to the Paperclip attachment andthen use ‘to_file’ like so:require 'csv'@upload = Upload.find(params[:upload_id])@parsed_file = CSV::Reader.parse(@upload.attachment.to_file)"
} ,
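Continuing the example above, a rough sketch of walking the parsed rows (old Ruby 1.8 CSV::Reader API); the Contact model and the column order are assumptions made for illustration.

# Illustrative only - Contact and the column layout are placeholders.
require 'csv'

@upload      = Upload.find(params[:upload_id])
@parsed_file = CSV::Reader.parse(@upload.attachment.to_file)

@parsed_file.each_with_index do |row, index|
  next if index == 0                       # skip the header row
  Contact.create(:first_name => row[0],
                 :last_name  => row[1],
                 :email      => row[2])
end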
{
"title" : "Issues with MacPorts After Upgrading to OS X Lion",
"category" : "",
"tags" : "macports",
"url" : "/2011/09/issues-with-macports-after-upgrading-to-os-x-lion/",
"date" : "2011-09-16 21:10:00 -0400",
"content" : "I realized this morning that I was having dependency issues with ImageMagickon my Mac, which I installed using MacPorts. I had recently upgraded toMac OS X Lion, so it made sense that I needed to update the software toresolve the issues, much like I had when I upgraded to Snow Leopard.I found this article that provided steps for migrating MacPorts for Lion,but I kept getting this error when I tried to uninstall all the packages:warning: Failed to execute portfile from registry for apache2@2.2.17_1+preforkmpm too many nested evaluations (infinite loop?)Warning: Failed to execute portfile from registry for apache2@2.2.17_1+preforkmpm too many nested evaluations (infinite loop?)Warning: Failed to execute portfile from registry for apache2@2.2.17_1+preforkmpm too many nested evaluations (infinite loop?)Warning: Failed to execute portfile from registry for apache2@2.2.17_1+preforkmpm too many nested evaluations (infinite loop?)I searched and searched for a solution, and even tried to uninstall apache2@2.2.17_1+preformkmpm, but it told me that apache2 @2.2.17_1+preformkmpmdepends on itself, and that I should uninstall it. Obviously that wasn’tpossible.Finally I found the migration guide provided by the MacPorts website,which instructed me to install the new MacPorts for OS X Lion before removingand reinstalling the packages. I downloaded the DMG file for MacPorts(MacPorts-2.0.3-10.7-Lion), installed it, and then the packages uninstalledwithout any issues."
} ,
{
"title" : "Error: 'unintitialized constant MySQL' with Rails 3 on Snow Leopard Mac",
"category" : "",
"tags" : "",
"url" : "/2011/05/error-unintitialized-constant-mysql-with-rails-3-on-snow-leopard-mac/",
"date" : "2011-05-20 02:05:08 -0400",
"content" : "I just installed Rails 3 on my iMac, which is running Snow Leopard. I’m tryingto build a web hosting website/billing system/management system. I configuredthe app to use MySQL in /config/database.yml like so:development: adapter: mysql encoding: utf8 database: hosting_development username: root password: host: 127.0.0.1I had to do this because I created the Rails app without specifying that Ididn’t want sqlite3. I then ran rake db:create and I got this error:rake aborted!uninitialized constant MysqlI found an article online which advised to do the follow, with theexception of the path to mysql_config. I’m using MacPorts for most of thepackages in my development environment.$ sudo gem uninstall mysql$ sudo env ARCHFLAGS="-arch x86_64" gem install mysql ----with-mysql-config=/opt/local/lib/mysql5/bin/mysql_configI then modified the Gemfile in my app with the following. I think this mighthave been what was needed most, not the gem reinstallation. I forgot to dothis until right before it was fixed.# gem 'sqlite3'gem 'mysql'"
} ,
{
"title" : "Installing PHPdoc for Ubuntu for use with Command Line",
"category" : "",
"tags" : "",
"url" : "/2011/05/installing-phpdoc-for-ubuntu/",
"date" : "2011-05-18 23:21:45 -0400",
"content" : "I wanted to install PhpDocumentor for use on my server so that I couldgenerate documentation from the command line. I found this article, whichinstructed me to somehow change the PEAR setting for data_dir. I installedPhpDocumentor in my web root and it just didn’t work and gave me a bunch oferrors in the browser.I uninstalled the package via PEAR (ala ‘pear uninstall phpdocumentor’command), then deleted the folder. I just wanted to install it in the properpath so that it’s available from the command line using ‘phpdoc’.I downloaded the source of PhpDocumentor directly from SourceForge, and it hadphpdoc available in the folder, but I’m not sure which folder to put it in sothat it’s in my path for the command line.Finally I just reset the ‘data_dir’ setting back to the default by using thiscommand:sudo pear config-set data_dir /usr/share/php/dataNext I installed the package again via PEAR, just like I should have originallydone.sudo pear upgrade PhpDocumentorThis worked like a charm.$ which phpdoc/usr/bin/phpdoc"
} ,
{
"title" : "Ruby on Rails session - Access from PHP",
"category" : "",
"tags" : "",
"url" : "/2010/12/ruby-on-rails-session-access-from-php/",
"date" : "2010-12-14 03:25:37 -0500",
"content" : "If you need to access a Ruby on Rails session from a PHP application runningunder the same domain, you can do this by splitting the string in the cookieby the ‘–’. Thanks to Frederick Cheung for pointing this out.Here is coding I added to my PHP script which was running from a path underthe same domain. Unfortunately the data returned is in Marshal format, andthere isn’t a Marshal.load function for PHP to get the values easily.$raw_session_string = $_COOKIE['_app_session'];$data_split = explode ('--' , $raw_session_string);$encoded_session = $data_split[0];$marshal_data = base64_decode($encoded_session);echo "<pre>marshal_data:". print_r($marshal_data,1) ."</pre>\n";"
} ,
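For reference, the Ruby side of the same cookie (Rails 2 cookie-based session store) decodes roughly like this; '_app_session' is just the example cookie name from the entry above.

# Rough Ruby equivalent of the PHP snippet above (Rails 2 CookieStore),
# run inside a controller where the cookies helper is available.
require 'base64'

raw_session_string = cookies['_app_session']       # value looks like "BAh7Bzo...--0123abcd"
encoded_session, digest = raw_session_string.split('--')

session_data = Marshal.load(Base64.decode64(encoded_session))
# session_data is the session hash that PHP only sees as an opaque Marshal blob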
{
"title" : "Obtaining Request Domain Name for Ruby on Rails",
"category" : "",
"tags" : "",
"url" : "/2010/10/obtaining-request-domain-name-for-ruby-on-rails/",
"date" : "2010-10-12 04:19:16 -0400",
"content" : "I’m using Rails 2.3.8. To obtain the domain name for the website beingrequested (i.e. mysite.com, mysite.net), just reference ‘request.host’.ruby@host = request.hostYou can only reference request.host in the views, or the controller."
} ,
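A small sketch of the related request methods in a Rails 2.3 controller; the controller name is made up.

# Illustrative only.
class SitesController < ApplicationController
  def show
    @host      = request.host              # "www.mysite.com"
    @domain    = request.domain            # "mysite.com"
    @subdomain = request.subdomains.first  # "www"
  end
end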
{
"title" : "Changing Column Order via ActiveRecord Migration",
"category" : "",
"tags" : "",
"url" : "/2010/08/changing-column-order-via-activerecord-migration/",
"date" : "2010-08-16 01:09:40 -0400",
"content" : "Is it possible to change the order of the columns in your MySQL (or otherdatabase) table using a migration? Lets see.If you check the ActiveRecord::Migration documentation you’ll see there isa method called ‘change_column’ which accepts various options.<tt>change_column(table_name, column_name, type, options)</tt>:Changes the column to a different type using the same parameters as add_columnAs of Rails 2.3.6 this is now available by using the :after option. You’llhave to include the field type, even though you are not modifying the type.Example:change_column :orders, :tax_rate, :float, :after => :tax_state"
} ,
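Wrapped in a migration, the example above would look roughly like this (the :after option is honoured by the MySQL adapter); the migration class name is arbitrary.

class MoveTaxRateColumn < ActiveRecord::Migration
  def self.up
    # The type must be restated even though only the position changes.
    change_column :orders, :tax_rate, :float, :after => :tax_state
  end

  def self.down
    # Column order doesn't affect behaviour, so there is nothing to undo.
  end
end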
{
"title" : "Rails Performance Statistics",
"category" : "",
"tags" : "",
"url" : "/2010/08/rails-performance-statistics/",
"date" : "2010-08-09 05:26:40 -0400",
"content" : "Again, as I search for things, I stumble onto new tools. I just found outabout a tool for monitoring the performance of Java and Ruby applicationscalled New Relic.They provide a free service level for Startups and Students even."
} ,
{
"title" : "RailRoad Gem",
"category" : "",
"tags" : "diagram, models, activerecord",
"url" : "/2010/08/railroad/",
"date" : "2010-08-09 05:07:44 -0400",
"content" : "I just discovered that there is a Ruby gem which generates diagrams based onRails models (ActiveRecord). I ran across this website a while back, butdidn’t quite connect the dots. I was just reading an article on placingmodels into their own namespace, and I realized that the diagram it uses as anexample was generated using RailRoad.[http://railroad.rubyforge.org/]"
} ,
{
"title" : "Annotate Models",
"category" : "",
"tags" : "rails, plugin",
"url" : "/2010/08/annotate-models/",
"date" : "2010-08-08 01:02:30 -0400",
"content" : "There is a rails plugin which adds schema information for the models incomments at the top of your model definition files. It’s really useful. Checkout the instructions on installing and using this plugin at:[http://pragdave.pragprog.com/pragdave/2006/02/annotate_models.html]"
} ,
{
"title" : "Selenium RC, Firefox 3, and Ubuntu",
"category" : "",
"tags" : "linkedin",
"url" : "/2010/07/selenium-rc-firefox-3-and-ubuntu/",
"date" : "2010-07-29 21:46:26 -0400",
"content" : "I’ve got a system setup which uses Firefox on an Ubuntu machine, with theSelenium RC server (remote control). I had a set of scripts which would runautomatically every 15 minutes, which would prompt Firefox to open and go tothe site and submit certain forms. This stopped working after I ran an updateon some packages in my Ubuntu machine (9.04 Jaunty).I was able to resolve this issue by upgrading fromSelenium RC 1.0.1 to 1.0.3."
} ,
{
"title" : "Undefined method 'ref' for ActiveSupport::Dependencies:Module",
"category" : "",
"tags" : "",
"url" : "/2010/07/unable-to-run-rails-migrations-mysql-gem-on-snow-leopard/",
"date" : "2010-07-27 13:19:01 -0400",
"content" : "After upgrading to Snow Leopard, and trying to run ‘rake db:migrate’, Ireceived this error once. This seems common to others which have upgraded,especially back when Snow Leopard was released in August of 2009:rake aborted!uninitialized constant MysqlCompat::MysqlRes(See full trace by running task with --trace)I’ve tried to troubleshoot by reinstalling the MySQL gem, and the 64 bitversion of the MySQL server. I’m no longer receiving this error above.When installing the MySQL Gem, I receive a bunch of errors, unless I specifyto not install documentation.$ sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/lib/mysql5/bin/mysql_configBuilding native extensions. This could take a while...Successfully installed mysql-2.8.11 gem installedInstalling ri documentation for mysql-2.8.1...No definition for next_resultNo definition for field_nameNo definition for field_tableNo definition for field_defNo definition for field_typeNo definition for field_lengthNo definition for field_max_lengthNo definition for field_flagsNo definition for field_decimalsNo definition for time_inspectNo definition for time_to_sNo definition for time_get_yearNo definition for time_get_monthNo definition for time_get_dayNo definition for time_get_hourNo definition for time_get_minuteNo definition for time_get_secondNo definition for time_get_negNo definition for time_get_second_partNo definition for time_set_yearNo definition for time_set_monthNo definition for time_set_dayNo definition for time_set_hourNo definition for time_set_minuteNo definition for time_set_secondNo definition for time_set_negNo definition for time_set_second_partNo definition for time_equalNo definition for error_errnoNo definition for error_sqlstateInstalling RDoc documentation for mysql-2.8.1...No definition for next_resultNo definition for field_nameNo definition for field_tableNo definition for field_defNo definition for field_typeNo definition for field_lengthNo definition for field_max_lengthNo definition for field_flagsNo definition for field_decimalsNo definition for time_inspectNo definition for time_to_sNo definition for time_get_yearNo definition for time_get_monthNo definition for time_get_dayNo definition for time_get_hourNo definition for time_get_minuteNo definition for time_get_secondNo definition for time_get_negNo definition for time_get_second_partNo definition for time_set_yearNo definition for time_set_monthNo definition for time_set_dayNo definition for time_set_hourNo definition for time_set_minuteNo definition for time_set_secondNo definition for time_set_negNo definition for time_set_second_partNo definition for time_equalNo definition for error_errnoNo definition for error_sqlstateI’ve realized that these errors don’t occur if you try to install withoutdocumentation included.$ sudo env ARCHFLAGS="-arch x86_64" gem install mysql --no-ri --no-rdoc ----with-mysql-config=/opt/local/lib/mysql5/bin/mysql_configBuilding native extensions. This could take a while...Successfully installed mysql-2.8.11 gem installedI doubted if this really was a successful installation due to the errors whenincluding the documentation, but this isn’t the case.You’ll notice that I’m running MySQL from the installation location in/opt/local/lib/mysql5/. 
This is because I’m using MacPorts, and I definitely donot want to try to run from a mixed environment with some executables and gemsbeing installed under my system folders and library locations, and othersunder the location of the folders where MacPorts installs these things. Why?Because I read that some people have had problems with 32 bit versions of somelibraries and executables still being installed and this causing dependencyconflicts. I’m needing certain packages provided by MacPorts, so I’m trying toensure that my entire environment is based on MacPort packages.jason-imac:bin jason$ file `which ruby`/opt/local/bin/ruby: Mach-O 64-bit executable x86_64jason-imac:bin jason$ file `which mysql`/opt/local/bin/mysql: Mach-O 64-bit executable x86_64I’ve completely uninstalled all the gems I had installed. I’m only working onone new project, so luckily I have no dependencies on all these old gems. Ieven removed all the gems which came with the system by default. Now all thegems are installed based on the MacPorts version of Ruby I’m running.$ sudo gem list -d | grep Installed Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8 Installed at: /opt/local/lib/ruby/gems/1.8When I run a migration I still received an error. I doubted that the MySQL gemis installed properly, but this I can’t be completely sure of. No one else hasreported this error, so I’m really stuck.$ rake db:migrate(in /Users/jason/rj4)rake aborted!undefined method `ref' for ActiveSupport::Dependencies:Module(See full trace by running task with --trace)Finally I realized that this error was the result of some issue with theDevise gem which my co-worker installed for user authentication. I found thisGoogle Groups posting about the same error.I installed devise version 1.0.8 and this resolved the issue.$ sudo gem install devise -v=1.0.8Successfully installed devise-1.0.81 gem installedInstalling ri documentation for devise-1.0.8...Installing RDoc documentation for devise-1.0.8...$ rake db:migrate(in /Users/jason/rj4)rake aborted!Unknown database 'rj4_development'(See full trace by running task with --trace)$ rake db:create(in /Users/jason/rj4)$ rake db:migrate(in /Users/jason/rj4)== CreateProperties: migrating ===============================================-- create_table(:properties) -> 0.0712s== CreateProperties: migrated (0.0713s) ======================================== CreateUsers: migrating ====================================================-- create_table(:users) -> 0.0967s== CreateUsers: migrated (0.0969s) ==========================================="
} ,
{
"title" : "Setting up Deployment for Rails using Capistrano, Apache with Passenger and Git",
"category" : "",
"tags" : "rails, capistrano, git, passenger",
"url" : "/2010/07/setting-up-deployment-for-rails-using-capistrano-apache-with-passenger-and-git/",
"date" : "2010-07-21 02:48:23 -0400",
"content" : "I don’t have time right now to learn how to setup Capistrano. I just want arecipe that works and does the job. Here are my notes. First install the Capistrano gemsudo gem install capistrano Next you need to go into the directory of your Ruby on Rails applicationand capify it:capify . Next I recommend this article (I’ll rip off the deploy.rb soon and post ithere)Capistrano Deploy with Git and Passenger Once you’ve configured your deploy.rb, run this command to have it setupthe directories on the remote server (releases, shared, logs, etc).cap deploy:setup Next run this command to get the list of other capistrano commands you canrun:cap -TThe output should look likecap deploy # Deploys your project.cap deploy:check # Test deployment dependencies.cap deploy:cleanup # Clean up old releases.cap deploy:cold # Deploys and starts a `cold' application.cap deploy:migrate # Run the migrate rake task.cap deploy:migrations # Deploy and run pending migrations.cap deploy:pending # Displays the commits since your last deploy.cap deploy:pending:diff # Displays the `diff' since your last deploy.cap deploy:restart # Restarting mod_rails with restart.txtcap deploy:rollback # Rolls back to a previous version and restarts.cap deploy:rollback:code # Rolls back to the previously deployed version.cap deploy:setup # Prepares one or more servers for deployment.cap deploy:start # start task is a no-op with mod_railscap deploy:stop # stop task is a no-op with mod_railscap deploy:symlink # Updates the symlink to the most recently deployed ...cap deploy:update # Copies your project and updates the symlink.cap deploy:update_code # Copies your project to the remote servers.cap deploy:upload # Copy files to the currently deployed version.cap deploy:web:disable # Present a maintenance page to visitors.cap deploy:web:enable # Makes the application web-accessible again.cap invoke # Invoke a single command on the remote servers.cap shell # Begin an interactive Capistrano session."
} ,
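Not the author's recipe, but a minimal Capistrano 2-era config/deploy.rb for the Git-plus-Passenger setup described above; every name, path, and address is a placeholder.

# config/deploy.rb - illustrative sketch only; all values are placeholders.
set :application, "myapp"
set :repository,  "git@example.com:myapp.git"
set :scm,         :git
set :branch,      "master"
set :user,        "deploy"
set :use_sudo,    false
set :deploy_to,   "/var/www/#{application}"

role :web, "example.com"
role :app, "example.com"
role :db,  "example.com", :primary => true

# Passenger (mod_rails) reloads the app when tmp/restart.txt is touched.
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "touch #{File.join(current_path, 'tmp', 'restart.txt')}"
  end
end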
{
"title" : "Rake Tasks",
"category" : "",
"tags" : "",
"url" : "/2010/07/rake-tasks/",
"date" : "2010-07-16 20:37:56 -0400",
"content" : "If you’re wanting to know which Rake tasks are available for you to use fromthe command line, simply use the ‘rake -T’ command:$ rake -T(in /Users/jason/railsproject)rake db:abort_if_pending_migrations # Raises an error if there are pending migrationsrake db:charset # Retrieves the charset for the current environment's databaserake db:collation # Retrieves the collation for the current environment's databaserake db:create # Create the database defined in config/database.yml for the current RAILS_ENVrake db:create:all # Create all the local databases defined in config/database.ymlrake db:drop # Drops the database for the current RAILS_ENVrake db:drop:all # Drops all the local databases defined in config/database.ymlA really useful one is the ‘routes’ option which outputs a list of the routesconfigured.macbook:railsproject jason$ rake routes(in /Users/jason/railsproject) /:controller/:action/:id /:controller/:action/:id(.:format)"
} ,
{
"title" : "MySQL Gem Installation on Mac 10.5.8 - 64 bit??",
"category" : "",
"tags" : "",
"url" : "/2010/07/mysql-gem-installation-on-mac-10-5-8-64-bit/",
"date" : "2010-07-16 20:26:14 -0400",
"content" : "I’m setting up a new Ruby on Rails application, and tried to run the firstmigration for the creation of the new database. This failed because I didn’thave the MySQL gem installed. I’m using a 64 bit processor (Intel Core 2 Duo)so I installed the 64 bit MySQL for 10.5.8 (Leopard, I haven’t upgraded toSnow Leopard yet).When trying to run the installation command I received an error:$ sudo gem install mysqlPassword:Building native extensions. This could take a while...ERROR: Error installing mysql: ERROR: Failed to build gem native extension./opt/local/bin/ruby extconf.rbchecking for mysql_query() in -lmysqlclient... nochecking for main() in -lm... yeschecking for mysql_query() in -lmysqlclient... nochecking for main() in -lz... yeschecking for mysql_query() in -lmysqlclient... nochecking for main() in -lsocket... nochecking for mysql_query() in -lmysqlclient... nochecking for main() in -lnsl... nochecking for mysql_query() in -lmysqlclient... nochecking for main() in -lmygcc... nochecking for mysql_query() in -lmysqlclient... no*** extconf.rb failed ***Could not create Makefile due to some reason, probably lack ofnecessary libraries and/or headers. Check the mkmf.log file for moredetails. You may need configuration options.I looked online for a solution to this, and found that you have to point to thedirectory where MySQL is installed. I tried:sudo gem install mysql -- --with-mysql-dir=/usr/local/mysqlThis resulted in this error instead:Building native extensions. This could take a while...ERROR: Error installing mysql: ERROR: Failed to build gem native extension./opt/local/bin/ruby extconf.rb --with-mysql-dir=/usr/local/mysqlchecking for mysql_ssl_set()... nochecking for rb_str_set_len()... nochecking for rb_thread_start_timer()... nochecking for mysql.h... nochecking for mysql/mysql.h... no*** extconf.rb failed ***Could not create Makefile due to some reason, probably lack ofnecessary libraries and/or headers. Check the mkmf.log file for moredetails. You may need configuration options.Since the error is reporting that it can’t find mysql.h (header file), I takeit that the MySQL installer didn’t include the header files. From the commandline if I go to /usr/local/mysql/include I see the mysql.h right in there.I removed the preference panel option by opening the System Preferences, thenholding CTRL and clicking on the MySQL option. This gave me an option to clickon to remove it. I then deleted /usr/local/mysql and/usr/local/mysql-5.1.48-osx10.5-x86_64sudo rm -rf /usr/local/mysqlsudo rm -rf /usr/local/mysql-5.1.48-osx10.5-x86_64/Someone else mentioned something about using the 32-bit version of MySQL, so Idownloaded it and installed it instead (hoping it doesn’t conflict with theprocessor I’m using). It installed and started up just fine. I ran the commandto install the MySQL Gem again:$ sudo gem install mysql -- --with-mysql-dir=/usr/local/mysqlBuilding native extensions. This could take a while...Successfully installed mysql-2.8.11 gem installedInstalling ri documentation for mysql-2.8.1...No definition for next_resultNo definition for field_name...No definition for error_sqlstateInstalling RDoc documentation for mysql-2.8.1...No definition for next_resultNo definition for field_name...No definition for error_sqlstateFinally it installed just fine."
} ,
{
"title" : "Wordpress Plugin - Custom Pages?",
"category" : "",
"tags" : "wordpress, plugin, permalinks",
"url" : "/2010/06/wordpress-plugin-custom-pages/",
"date" : "2010-06-29 01:39:15 -0400",
"content" : "My DilemaOkay. I’ve worked on making a Wordpress plugin once. It’s pretty easy to makea plugin which replaces a tag such as [another-plugin-tag parameter="value"]with some sort of other HTML code. For instance it’s pretty straight forwardto replace [iframe http://www.google.com/ 800 600] with an iframe tag.Something I’ve found difficult to find however is how you can create custompages as soon as the plugin is activated, which are accessible using apermalink such as http://www.wordpress-site.com/myplugin/search/ which cansubmit a form to another URL such ashttp://www.wordpress-site.com/myplugin/results/ and then provide the resultswith a URL such as http://www.wordpress-site.com/myplugin/results/id/3/ oranything else pretty like that.And I’m not talking about searching for posts or pages or anything. I’mtalking about extending Wordpress to have functionality which is not blogrelated, while still being a plugin.I installed the ‘Contact Form 7’ plugin to see how it submitted the form, andthen I realized it uses Ajax. Great. I don’t want Ajax.A Hint of a SolutionI searched online looking for something to explain this, because certainlysomeone else must have been scratching their head like I have. No guidesseemed to explain this to me. I’d search for ‘Wordpress plugin permalinks’ andI’d only find plugins that deal with permalinks somehow (not what I waslooking for).And I was ignoring all the documentation on hooks and filters, because I don’twant to filter normal blog content, or hook to some blog content. But I wasmistaken. I do want to hook a function to something. It turns out thatWordpress has a number of actions which it goes through when loading a normalpage, available by name in the Plugin API Action Reference page.At some point of the page loading the permalink style URL, which is basicallymade possible by a mod_rewrite rule which says that any address is processedby index.php. The Wordpress system determines if the URL relates to a page orpost or something, or otherwise provides a 404 style error. Okay, so if I cansomehow tell Wordpress - “Yes! There is a /myplugin/ page”, or “Yes! There isa /myplugin/results/” page, then I’ll have one step of my solution finished.After further researching I found that there is an article on howWordpress processes a request, and it even mentions GET and POSTsubmissions. This was also obviously hard because ‘post’ is the term used torefer to the blog post records, so a search on Google for ‘Wordpress postrequest’ didn’t return something relevant.To Be ContinuedI’m going to continue to investigate how to build the type of plugin whichprovides custom URL’s, without requiring the existence of pages for theseURLs, and also somehow block the creation of pages which use the permalinkstructure used by the plugin.Update: It’s been a while since I posted this, but I found that there arerewrite rules stored somewhere in a serialized format or something in thewp_options table.There are some functions you can use to add or modify the rewrites/routes sothat they point to your custom script. Once a certain rule/route is pointingto your own plugin script, you can do whatever you want with the requests….serve up multiple pages, etc."
} ,
{
"title" : "Ubuntu 9.10 Karmic Koala - VNC resolution limited without monitor",
"category" : "",
"tags" : "",
"url" : "/2010/02/ubuntu-9-10-karmic-koala-vnc-resolution/",
"date" : "2010-02-20 10:29:28 -0500",
"content" : "Update 04/23/2010 - I’m not finding a solution to this issue. Sorry. I’velost interest.I recently setup Ubuntu 9.10 on a desktop system, so I could use it as a fileserver. I’m was able to enable the remote desktop feature for it, which isbasically a VNC server.The issue is that once I disconnected a monitor from the computer and set itup next to my router (plugged directly in), and restarted it, VNC would onlywork with a maximum resolution of 640x480.In a forum someone pointed out that this configuration added to/etc/X11/xorg.conf would save the day:Section "Device" Identifier "VNC Device" Driver "vesa"EndSectionSection "Screen" Identifier "VNC Screen" Device "VNC Device" Monitor "VNC Monitor" SubSection "Display" Modes "1024x768" EndSubSectionEndSectionSection "Monitor" Identifier "VNC Monitor" HorizSync 30-70 VertRefresh 50-75EndSectionI just restarted after my ‘sudo service gdm restart’ command didn’t seem towork. I think this might have something to do with a special Nvidia driver I’musing. Hm…And I thought this most recent article was giving methe holy grail.Eh. I’m going to bed now. It’s 3:29 AM. I’ll slay this dragon later perhaps."
} ,
{
"title" : "PHP Not Parsing on Debian / Ubuntu server with Apache2",
"category" : "",
"tags" : "",
"url" : "/2010/02/php-not-parsing-on-debian-ubuntu-server/",
"date" : "2010-02-11 00:03:59 -0500",
"content" : "My friend Marshall was recently having issues getting PHP5 installed andworking on his Ubuntu server, which is a Debian based distribution.We updated all the packages involved…Apache2, php5, libapache2-mod-php5,made sure the module was installed, restarted Apache2, etc. Nothing worked.It turns out that the default php5.conf configuration for Debian / Ubuntu’spackages are using an incorrect syntax. Edit /etc/apache2/mods-available/php5conf to reflect:<FilesMatch \.php$> SetHandler application/x-httpd-php</FilesMatch>…instead of…AddType application/x-httpd-php .phpSpecial thanks to this Apache wiki article for pointing this out:http://wiki.apache.org/httpd/DebianPHPI’m just posting this solution here for all the other nerds having the sameissue that aren’t finding this article in Google."
} ,
{
"title" : "Dell Dimension 3000 - Audio is Choppy",
"category" : "",
"tags" : "",
"url" : "/2010/01/dell-dimension-3000-audio-is-choppy/",
"date" : "2010-01-13 21:49:45 -0500",
"content" : "At work I have a Dell Dimension 3000 workstation, and for months I’ve put upwith the computer being kind of slow. I just figured it was due to thecomputer being kind of old…but I brought in some headphones so I could dosome serious music listening while I work and just got fed up with the way thesound was choppy. Anytime I’d do anything processor intensive the music I waslistening to in Grooveshark (which I highly recommend you checkout) would sound like crap.So I installed the latest sound driver from Dell.com for the integrated soundcard and still after rebooting it sucked. Also when I start the computer up inthe morning, I would have to wait like 3 minutes at least for all the startupprograms to load. Nothing else would open until this was done.So anyway, I searched for a solution to this issue and found this post:choppy sound on DELL Dimension 3000It turns out that the Primary IDE controller in Windows XP was set to use somesort of PIO mode to communicate with the hard drive on the computer, asopposed to DMA mode. DMA stands for Direct Memory Access, and is totally moreefficient than the PIO mode, Programmed Input-Output, where the centralprocessor transfers data byte for byte or word for word through your system tothe other components (like your sound card).My computer runs blazing fast now in comparison to how it was running before.I’m so glad I checked into this. If you have a Dell Dimension 3000 and it’srunning like crap, definitely try this."
} ,
{
"title" : "Un-Hide Someone in Facebook",
"category" : "",
"tags" : "",
"url" : "/2010/01/un-hide-someone-in-facebook/",
"date" : "2010-01-11 20:40:15 -0500",
"content" : "My aunt recently asked me how to un-hide someone from the news feed for herFacebook account.Like many other people before her, she’s pressed the ‘Hide’ button forsomeone, then realized it was a mistake…but couldn’t find out how to unhidethem.To add someone back to your Facebook news feed, simply scroll to the bottom ofthe page and click on the ‘Edit Options’ link for your news feed.After you’ve done this, simply click on the ‘Add to News Feed’ button to addupdates regarding one of your friends, or other pages, back to your Facebooknews feed."
} ,
{
"title" : "Selenium - no display specified",
"category" : "",
"tags" : "",
"url" : "/2010/01/selenium-no-display-specified/",
"date" : "2010-01-11 17:51:10 -0500",
"content" : "I’m not very experienced with X-windows on the Linux platform, so I’m not tooskilled in troubleshooting issues with the display. I recently upgraded anUbuntu system at work to use Ubuntu 9.04 (Jaunty Jackalope), which only hasFirefox 3 available (no package for Firefox 2). I had a Selenium server setuprunning tests, but they stopped working after I upgraded to this newer versionof Ubuntu.I thought that perhaps Selenium wasn’t compatible with version 3 of Firefox,but this isn’t the case. The Selenium website says ‘Firefox 2+’ for browsersrunning on Linux.The error I was receiving when I would run a test was:10:46:19.778 INFO - Preparing Firefox profile...Error: no display specifiedAfter a bunch of research and Googling online, it turned out I just needed torun this before I started my Selenium server:export DISPLAY=:0I hope this saves someone else some time."
} ,
{
"title" : "Common Computer Mistakes",
"category" : "",
"tags" : "",
"url" : "/2009/11/common-computer-mistakes/",
"date" : "2009-11-02 17:59:02 -0500",
"content" : "Whether you are an individual, or a startup business, you’ll more than likelyhave a website or a computer. I’ve had my fair share of experiences withpersonal computers (PCs and Macs), and I’ve also worked at a website hostingcompany, HostDime.I’ve often cringed due to the negligence of individuals and/or companiesregarding the following items. Here are some tips to save you from largelosses.Data BackupsPeople often assume that they can accumulate all their photos, music, videos,documents, and other very important business related files on their computersand it’s all safe and sound. However, this is not really the case. The harddrive inside your computer will eventually deteriorate, if not completely stopworking out of the blue. It’s just a matter of time. Sure, it’s not likelythat this will happen with a brand new Seagate hard drive with a 5 yearwarranty, but it’s surely possible that it could occur at any time. Do youwant to leave the only copies of all your important files up to chance?I recommend that if you’re an individual, go to a store such as Best Buy andpurchase a backup drive, and set up your operating system or backup softwarewhich is packaged with the drive, so that it syncs up data with your computershard drive.If you’re a business, then you might want to setup a file server of some sorton the network, and map an icon on each computer to the drive on this server.Tell your staff to store important files on this drive, or at least updatecopies of their important files on this drive. If their computer crashes,you’ll have a backup, and you’ll be able to easily backup all these importantfiles from the file server to another backup drive just the same. This willkeep you from having to go from computer to computer backing up the files,since they are all in one location on the file server.Operating System / Recovery DiscsThis applies more to those with personal computers, but also to businesses. Ifyou purchase a computer, it will come with either installation discs for theoperating system (typically a version of Windows or Mac OSX), or with special‘Recovery’ discs. Along with these discs there may also be documentation whichincludes registration keys of some sort. Place these in a safe location, anddo not let them get mixed in with all the other junk in your desk.Often when a persons computer crashes (see above on data back ups), or theyget a virus, or anything thats makes the computer unusable, the best solutionis to erase everything from the hard drive and restore the computer to the wayit was when you first got it…a fresh installation of Windows/MacOSX, withthe default programs, just like you just got the computer from the store. Noviruses, no other programs installed.To achieve this, you need to have the discs that came with your computer, andany necessary registration keys for the operating system.If you’ve lost these, then the restoration of your computer to it’s originalstate will not be possible until you either order new restore discs from thecompany you bought your computer from, or purchase new operating system discsand registration keys for the full retail price (or OEM price if you know howto search and buy it cheaper).Domain Name RegistrationA common issue I encounter with some website owners is that they hire someindividual to setup their website, and then the relationship goes sour and thedesigner/developer holds their website hostage. 
Certainly it’s fine for anindividual to hold the work they’ve done on your website until they are paidfor their services, but to hold the data you’ve paid for already, as well asyour domain name, certainly amounts to a legal travesty.To avoid this situation it is highly recommended that you register or transferyour domain name to a well known domain registration service provider, such asNamecheap, or Register.com. There are surely other large organizationswhich will not hold your domain hostage.Once you’ve transfered your domain name to an account with a reputableregistration organization, make sure you put a very complex password on theaccount, and also ensure that your domains are updated with your contactinformation, and that they are Locked. This will ensure that your accountcannot be hacked into, you are reflected as the owner of the domain(Administrative Contact), and if the domain is locked it cannot be transferedto another company (stolen) by any means.If your domain is in your ownership, it can easily be updated to point to thenameserver addresses of any company which is hosting the website, even if it’sthe hosting account setup by your designer/developer that is providing acomplete website solution to you (website design, development, and hosting).The one thing you don’t want them having control over is your domain however,because unlike the design of your website, the domain cannot be replaced.Website BackupsIt can be very convenient for your website designer/developer to host yourwebsite for you. This gives them more control over the website, they arefamiliar with the features of the hosting, and providing the hosting to yousupplements their income and keeps them in business (design/development workisn’t always available for some).This is perfectly fine, and I’m not intending to cause any alarm, but youcertainly want to have a copy of what you’re paying for …and the only way toachieve this is to periodically obtain backups of your website. Ask yourdesigner/developer if they have access to a control panel, such as cPanel,which allows them to download backups of their website files and databases(if applicable).With these files you can always setup your website elsewhere without thedesigner/developer, or the same hosting company. Even with a reputabledesigner/developer, or hosting company, the server hosting your website useshard drives which may crash at any moment.Even though these companies do try to keep backups for you, most do notguarantee that backups will be available for your data. The best solution isto keep a copy for yourself just in case the worst scenario arises."
} ,
{
"title" : "Rounded Corners",
"category" : "",
"tags" : "",
"url" : "/2009/04/rounded-corners/",
"date" : "2009-04-24 20:11:18 -0400",
"content" : "One of the most popular Web 2.0 practices is rounded corners. How do you getthem without uploading images, and nesting DIV’s, and worrying about othercomplications that can break your precious rounded corners?Answer! jQuery CornersjQuery Corners is compatible with Firefox, Internet Explorer 6+, Safari(including iPhone), Google Chrome, and Opera 9.0.All it takes is a simple jQuery style selector call such as the following:<script> $(document).ready( function() { $('.rounded').corners(); });</script><div style="background-color:#acc; padding:5px" class="rounded">Simple Example</div>You can also experiment further with documented options to change the radius(amount of curve) for the rounded corners, and will even show properly ifthere is a background image specified inside of the object with roundedcorners. Download jQuery Corners jQuery Corners Documentation"
} ,
{
"title" : "Database Schema Information",
"category" : "",
"tags" : "mysql, rails",
"url" : "/2009/03/database-schema-information/",
"date" : "2009-03-13 20:20:50 -0400",
"content" : "It can be very useful to have the database table schema information availableto you when you are working on a model in a Ruby on Rails application. Thereis a plugin available which provides the schema information in comments at thetop of each model called Annotate Models Plugin.# == Schema Information# Schema version: 20090215021706## Table name: orders## id :integer(11) not null, primary key# order_number :integer(11) default(0), not null# created_on :datetime# shipped_on :datetime# order_user_id :integer(11)# order_status_code_id :integer(11) default(1), not null# notes :text# referer :string(255)# order_shipping_type_id :integer(11) default(1), not null# product_cost :float default(0.0)# shipping_cost :float default(0.0)# tax :float default(0.0), not null# auth_transaction_id :string(255)# promotion_id :integer(11) default(0), not null# shipping_address_id :integer(11) default(0), not null# billing_address_id :integer(11) default(0), not null# order_account_id :integer(11) default(0), not null# subscription_id :integer(11)You can install the plugin using the following command from the root of yourRails application.script/plugin install http://repo.pragprog.com/svn/Public/plugins/annotate_modelsAfter you are done installing the plugin, simply run the rake task by usingthis command:rake annotate_models"
} ,
{
"title" : "PHP Compilation",
"category" : "",
"tags" : "php",
"url" : "/2005/02/php-compile/",
"date" : "2005-02-28 05:22:01 -0500",
"content" : "I’m pretty proud of myself. I just configured PHP to compile without anyproblems…well no imap support, but that can wait.I saw that someone asked how to resolve the problem, but had no answer, andthen I realized with the help of an answer from someone else on a messageboardthat you have to install the development version of the libraries to compilecertain support into a program.Here is my chance to give back to the Linux world by helping another newbieout (without acting like I was above him or anything, just a newbie helping anewbie).See Post"
} ,
{
"title" : "MBox and Linux Test Server",
"category" : "",
"tags" : "mbox, linux",
"url" : "/2005/02/mbox-and-linux-test-server/",
"date" : "2005-02-23 18:25:01 -0500",
"content" : "Well. I hooked up the MBox last night, and just like before but even moredisappointing, the damn thing is still creating a digital hum and static,triggered by the movements of the mouse and the processing of programs. I don’tknow if its some sort of electrical interference, a bad USB cable, or both.I hope I can find the solution. The first thing I’m going to do is replace thecable and try to clean the connectors in the MBox.I got the web server running on my little Linux test server. I was hostingredconfetti.net on the server that this website is hosted on for a while,but recently I moved the DNS hosting onto my server, along withredconfetti.net.I currently have various videos hosted off this server. They are availablefrom the old redconfetti.net site - Misc VideosI’m going to continue to work on redconfetti.net as a Linux tutorial site. Iwant to provide to others the solutions to the obstacles I meet and haveovercome. I don’t know if other people will have the same issues I’m having,and I am trying to read the manual, but oh well."
} ,
{
"title" : "PHP/MySQL Bug Tracking",
"category" : "",
"tags" : "bug tracking",
"url" : "/2004/09/bug-tracking/",
"date" : "2004-09-09 16:51:00 -0400",
"content" : "For anyone who needs a free web based Bug Tracking system programmed usingPHP/MySQL, check out Flyspray.I searched very far to find a web based program to track the bugs in thewebsite I work on in ASP/MS-SQL. I know PHP/MySQL so I was able to setup thisbug tracker, and fortunately I can modify it to do things I need it to do.Its features appear to be good enough for tracking bugs in actual software,and it can be setup to suit those debugging web based software."
}
,
{
"title" : "C Programming Language",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/c/",
"date" : "",
"content" : "Back to Cheat SheetsData Types Type Keyword Bytes Range Character char 1 -128 to 128 Short integer short 2 -32767 to 3276 Integer int 4 -2,147,483,647 - 2,147,483,647 Long Integer long 4 -2,147,483,647 - 2,147,483,647 Long Long Integer long long 8 -9,223,372,036,854,775,807 - -9,223,372,036,854,775,807 Unsigned character unsigned char 1 0 to 255 Unsigned short integer unsigned short 2 0 to 65525 Unsigned integer unsigned int 4 0 to 4,294,967,295 Unsigned long integer unsigned long 4 0 to 4,294,967,295 Unsigned long long integer unsigned long long 8 0 to 18,446,744,073,709,551,615 Single-precision floating-point float 4 1.2E-38 to 3.4E38¹ Double-precision floating-point double 8 2.2E-308 to 1.8E308² Constants#define PI 3.14159const int count = 100;Operator Precedence Operators Relative Precedence ++ – 1 * / % 2 + - 3 "
} ,
{
"title" : "Categories",
"category" : "",
"tags" : "",
"url" : "/categories/",
"date" : "",
"content" : ""
} ,
{
"title" : "Blender Cheat Sheets",
"category" : "",
"tags" : "",
"url" : "/resources/notes/blender/cheat-sheets/",
"date" : "",
"content" : "Back to Blender Notes Index Blender Hotkey Reference Blender Manual - Keymap Cheatography - Blender Full Keyboard Shortcuts"
} ,
{
"title" : "Docker",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/docker/",
"date" : "",
"content" : "Back to Cheat SheetsDocker# View docker client and server version infodocker version# List Running Containersdocker ps# List All Containersdocker ps -a# Initialize and start an 'nginx' container with "web" as namedocker run -d -P --name web nginx# Initialize a container running in the background with a local directory mounted in the container, based on the nginx containerdocker run -d -P -v $HOME/site:/usr/share/nginx/html --name mysite nginx# Initialize a container in interactive mode with a pseudo-terminal emulator (pty)# running Bash. Remove container upon exit.docker run -i -t --rm ruby:2.6 bash# View container portsdocker port [container_id]# Start containerdocker start [container_id]# Restart containerdocker restart [container_id]# Stop Containerdocker stop [container_id]# SSH into running Docker container for inspection / debuggingdocker attach [container_id]# Remove Docker container (requires stop to remove)docker stop [container_id]docker rm [container_id]# Display stdout from containerdocker logs [container_id]# Display processes for containerdocker top [container_id]# Copy files/folders from the local filesystem into a containerdocker cp foo.txt mycontainer:/foo.txtdocker cp /Projects/myapp mycontainer:/app# Copy files/folders from the contain to the local filesystemdocker cp [container_name]:/foo.txt foo.txt# Get JSON data on containerdocker inspect [container_id]# View local docker imagesdocker images# Remove docker imagedocker rmi [image_id]# Search for Docker imagesdocker search [image_name]# Download Docker image for later usedocker pull [image_name]# Build Docker image from Git repository (requires Dockerfile)docker build https://github.com/username/reponame.git#master# Tag Docker image in preparation for push to remote repositorydocker tag [image_id] [repository_url]:[port]/[name]:[tag]# Push Docker image to remote repositorydocker push [repository_url]:[port]/[name]:[tag]"
} ,
{
"title" : "ES2015 - The Shape of Javascript to Come",
"category" : "",
"tags" : "",
"url" : "/resources/notes/javascript/es2015/",
"date" : "",
"content" : "ES2015 - The Shape of Javascript to ComeES2015, formerly known as ES6, is the most extensive update to the JavaScript language since the publication of itsfirst edition in 1997.The committee decided to use the year of release, instead of version number.Level 1 - DeclarationsThe exercises will focus on a forum web app. The first feature will be loading users profile into the sidebar of thesite.Declarations with letThe loadProfiles function takes an array of users and adds their profile to the sidebar.<!DOCTYPE html> <!-- ... --> <script src="./load-profiles.js"> <script> loadProfiles(["Sam", "Tyler", "Brook", "Jason"]); </script></html>The loadProfiles Functionfunction loadProfiles(userNames) { if (userNames.length > 3) { var loadingMessage = "This might take a while..." _displaySpinner(loadingMessage) } else { var flashMessage = "Loading Profiles" _displayFlash(flashMessage) } console.log(flashMessage) // returns undefined // ... fetch names and build sidebar}Javascript detects undeclared variables and declares them at the top of the current scope. This is known ashoisting.One way to avoid confusion is to use let.let loadingMessage = "This might take a while..."Variables declared with let are not hoisted to the top of the scope. Instead of the hoisting occurring, thus leadingto an undeclared value for the references, you’ll instead get a ReferenceError informing you that the variable isnot defined.Declarations with let in for loopsWhen using var in for loops, there can be unexpected behavior.function loadProfiles(userNames) { for (var i in userNames) { _fetchProfile("/users/" + userNames[i], function() { console.log("Fetched for ", userNames[i]) }) }}This results in output:Fetched for AlexFetched for AlexFetched for AlexFetched for AlexThe last element of the array is outputted all 4 times because the _fetchProfile method is delayed in it’s executiondue to an AJAX call, so when it references the variable i, the iterations have completed and the value of i isset to 3 as it’s final value. When the callbacks calls, upon completion of the AJAX request, it references the 3and ends up outputting ‘Alex’ as the name.This is because the var i is hoisted to the top of the function and declared in that scope, and then other referencesto i. If this is replaced with let, a new i variable is created on each iteration.for (let i in userNames) {}Variables declared with let can be reassigned, but cannot be redeclared within the same scope.// no problemlet flashMessage = "Hello"flashMessage = "Goodbye"// problemlet flashMessage = "Hello"let flashMessage = "Goodbye" // results in a TypeError// no error because defining in a different scopelet flashMessage = "Hello"function loadProfiles(userNames) { let flashMessage = "Loading profiles" return flashMessage}Declarations with constMagic Number - A literal value without a clear meaning. If you end up using the number multiple times, it will lead tounnecessary duplication, which is bad code. People won’t know if these literal values are related or not.By using const we can create a read-only named constant.const MAX_USERS = 3if (userNames.length > MAX_USERS) { // ...}You cannot redefine a constant after it has been defined. Constants also require an initial value.// will result in errorconst MAX_USERS;MAX_USERS = 10;Block ScopedConstants are blocked scoped, which mean they are not hoisted to the top of the function. 
So if you define somethingwithin an if block, and try to access it from outside, it will return an error.if (userNames.length > MAX_USERS) { const MAX_REPLIES = 15} else { // ...}console.log(MAX_REPLIES) // ReferenceError. MAX_REPLIES is not defined)Level 2 - FunctionsFunctionsDefault ParametersUnexpected arguments might cause errors during function execution. This code runs just fine, because it’s passing anarray to the function as expected.loadProfiles(["Sam", "Tyler", "Brook"])However what if it was passed no arguments, or an undefined value.function loadProfiles(userNames) { // TypeError: Cannot read property ‘length’ of undefined let namesLength = userNames.length}A common practice is to validate the presence of arguments in the beginning of the function.let names = typeof userNames !== "undefined" ? userNames : []We can make this code better by defining default values for the parameters.function loadProfiles(userNames = []) { let namesLength = userNames.length console.log(namesLength)}Named ParametersThe options object is a widely used pattern that allows user-defined settings to be passed to a function in theform of properties on an object.function setPageThread(name, options = {}) { let popular = options.popular let expires = options.expires let activeClass = options.activeClass}setPageThread("New Version out Soon!", { popular: true, expires: 10000, activeClass: "is-page-thread"})This approach doesn’t make it very clear which options the function expects, and it requires the definition ofboilerplate code to handle the option assignment.Using named parameters for optional settings makes it easier to understand how a function should be invoked.function setPageThread(name, { popular, expires, activeClass }) { console.log("Name: ", name) console.log("Popular: ", popular) console.log("Expires: ", expires) console.log("Active: ", activeClass)}Now we know which arguments are available. The function call remains the same. Each property of the parameter is mappedto the argument appropriately.It’s NOT okay to omit the options argument altogether when invoking a function with named parameters when no defaultvalue is set for them.// results in TypeError: Cannot read property 'popular' of undefinedsetPageThread("New Version out Soon!")// sets default value, resulting in all becoming undefinedfunction setPageThread(name, { popular, expires, activeClass } = {}) { console.log("Name: ", name) console.log("Popular: ", popular) console.log("Expires: ", expires) console.log("Active: ", activeClass)}// defaults the value of popular to an empty string// while still defaulting the entire options hash if not providedfunction setPageThread(name, { popular = "", expires, activeClass } = {}) { console.log("Name: ", name) console.log("Popular: ", popular) console.log("Expires: ", expires) console.log("Active: ", activeClass)}Rest Parameter, Spread Operator, and Arrow FunctionsTags are a useful feature in web applications that have lots of read content. It helps filter results down to specifictopics. Let’s add these to the forum.We want our displayTags function to operate as follows:// variadic functions can accept any number of argumentsdisplayTags("songs")displayTags("songs", "lyrics")displayTags("songs", "lyrics", "bands")Arguments ObjectIn classic Javascript we could have used the arguments object, which is a build-in Array-like object that correspondsto the arguments of a function.This is not ideal because it’s hard to tell which parameters this function expects to be called with. 
Developers mightnot know where the arguments reference comes from (outside the scope of the function??).function displayTags() { for (let i in arguments) { let tag = arguments[i] _addToTopic(tag) }}If we change the function signature, it will break our code also.function displayTags(targetElement) { let target = _findElement(targetElement) for (let i in arguments) { let tag = arguments[i] // becomes broken because the // first argument is no longer a tag _addToTopic(tag) }}Rest ParameterThe new rest parameter syntax allows us to represent an indefinite number of arguments as an Array. This way,changes to function signature are less likely to break code.The three dots make tags a rest parameter.function displayTags(...tags) { // tags in an array object for (let i in tags) { let tag = tags[i] _addToTopic(tag) }}function displayTags(targetElement, ...tags) { // ...}The rest parameter must always go last in the function signature.Spread OperatorWe need a way to convert an Array into individual arguments upon a function call.getRequest("/topics/17/tags", function(data) { let tags = data.tags displayTags(tags) // tags is an Array})Our function is expecting to be called with individual arguments, not a single argument that is an Array. How can weconvert the Array argument into individual elements on the function call?getRequest("/topics/17/tags", function(data) { let tags = data.tags displayTags(...tags) // tags is an Array})Prefixing the tags array with the spread operator makes it so that the call is the same as callingdisplayTags(tag, tag, tag).The syntax for rest parameters and the spread operator look the same, but the former is used infunction definitions and the later in function invocations.JavaScript ObjectsJavaScript objects can help us with the encapsulation, organization, and testability of our code.Functions like getRequest and displayTags should not be exposed to caller code.getRequest("/topics/17/tags", function(data) { let tags = data.tags displayTags(...tags)})We want to convert code like above, into code like this:let tagComponent = new TagComponent(targetDiv, "/topics/17/tags")tagComponent.render()The TagComponent object encapsulates the code for fetching tags and adding them to a page.function TagComponent(target, urlPath) { this.targetElement = target this.urlPath = urlPath}TagComponent.prototype.render = function() { getRequest(this.urlPath, function(data) { // ... })}Properties set on the constructor function can be accessed from other instance methods. This is why the reference tothis.urlPath works within the render() method.Issues with Scope in Callback FunctionsAnonymous functions passed as callbacks to other functions create their own scope.function TagComponent(target, urlPath) { // this scope within the component object is not the same // as the anonymous function assigned to 'render' below this.targetElement = target this.urlPath = urlPath}TagComponent.prototype.render = function() { getRequest(this.urlPath, function(data) { let tags = data.tags // this.targetElement returns undefined displayTags(this.targetElement, ...tags) })}Arrow FunctionsArrow functions bind to the scope of where they are defined, not where they are used. 
This is also known aslexical binding.function TagComponent(target, urlPath) { this.targetElement = target this.urlPath = urlPath}TagComponent.prototype.render = function() { // arrow functions bind to the lexical scope getRequest(this.urlPath, data => { let tags = data.tags displayTags(this.targetElement, ...tags) })}Level 3 - Objects, Strings, and Object.assignObjects and StringsThe buildUser function returns an object with the first, last, and fullName properties.function buildUser(first, last) { let fullName = first + " " + last return { first: first, last: last, fullName: fullName }}let user = buildUser("Sam", "Williams")As you can see, we end up repeating the same thing as the key and value here in the return statement.Object InitializerWe can shorten this by using the object initializer shorthand, which removes duplicate variable names.return {first, last, fullName}; // way cleanerThis only works when the properties and values use the same name. It works anywhere a new object is returned, not justfrom functions.let name = "Sam"let age = 45let friends = ["Brook", "Tyler"]let user = { name, age, friends }Object Destructuring// generates 3 separate variables based on the object returnedlet { first, last, fullName } = buildUser(“Sam”, “Williams”);console.log(first); // > Samconsole.log(last); // > Williamsconsole.log(fullName); // > Sam WilliamsNot all of the properties have to be destructured all the time. We can explicitly select the ones we want.let { fullName } = buildUser("Sam", "Williams")console.log(fullName)In previous versions of JavaScript, adding a function to an object required specifying the property name and thenthe full function definition (including the function keyword);function buildUser(first, last, postCount) { let fullName = first + " " + last const ACTIVE_POST_COUNT = 10 return { first, last, fullName, isActive: function() { return postCount >= ACTIVE_POST_COUNT } }}A new shorthand notation is available for adding a method to an object where the keyword function is no longernecessary.return { first, last, fullName, isActive() { return postCount >= ACTIVE_POST_COUNT; }Template StringsTemplate strings are string literals allowing embedded expressions. This allows for a much better way to dostring interpolation.function buildUser(first, last, postCount) { let fullName = first + " " + last const ACTIVE_POST_COUNT = 10 // ...}You can instead use back ticks, with a dollar sign and curly brace syntax for interpolated variables.function buildUser(first, last, postCount) { let fullName = `${first} ${last}` // back-ticks, NOT single quotes const ACTIVE_POST_COUNT = 10 // ...}Template strings offer a new - and much better- way to write multi-line strings.let userName = "Sam"let admin = { fullName: "Alex Williams" }let veryLongText = `Hi ${userName},this is a veryverylong text.Regards, ${admin.FullName}`console.log(veryLongText)Object.assignIn this example we’ll add a count-down timer to a forum. The countdown timer displays the time left for users to undotheir posts after they’ve been created. 
Once the time is up, they cannot undo it anymore.We want to make our timer function reusable so that it can be used by other applications and domains.// simple examplecountdownTimer($(".btn-undo"), 60)// container class specifiedcountdownTimer($(".btn-undo", 60, { container: ".new-post-options" }))// container class and time unitscountdownTimer( $(".btn-undo", 60, { container: ".new-post-options", timeUnit: "minutes", timeoutClass: ".time-is-up" }))For functions that need to be used across different applications, it’s okay to accept an options object instead ofusing named parameters.// too many options, difficult to interpret calls to this functionfunction countdownTimer( target, timeLeft, { container, timeUnit, clonedDataAttribute, timeoutClass, timeoutSoonClass, timeoutSoonSeconds } = {}) { // ...}// easier to customize to different applicationsfunction countdownTimer(target, timeLeft, options = {}) { // ...}Some options might not be specified by the caller, so we need to have default values.function countdownTimer(target, timeLeft, options = {}) { let container = options.container || ".timer-display" let timeUnit = options.timeUnit || "seconds" let clonedDataAttribute = options.clonedDataAttribute || "cloned" let timeoutClass = options.timeoutClass || ".is-timeout" let timeoutSoonClass = options.timeoutSoonClass || ".is-timeout-soon" let timeoutSoonTime = options.timeoutSoonSeconds || 10}This works, but the default strings and numbers are all over the place, which makes the code hard to understand anddifficult to maintain.Using a local object to group default values for user options is a common practice and can help write moreidiomatic JavaScript. We want to merge options and defaults. Upon duplicate properties, those from optionsmust override properties from defaults.The Object.assign method copies properties from one or more source objects to a target object specified asthe first argument.function countdownTimer(target, timeLeft, options = {}) { let defaults = { container: ".timer-display", timeUnit: "seconds", clonedDataAttribute: "cloned", timeoutClass: ".is-timeout", timeoutSoonClass: ".is-timeout-soon", timeoutSoonTime: 10 } // we pass a {} because the target object is modified // and used as return value // Source objects remain unchanged let settings = Object.assign({}, defaults, options)}In case of duplicate properties on source objects, the value from the last object on the chain always prevails.Properties in options3 will override options2, and options2 will override options.function countdownTimer(target, timeLeft, options = {}) { let defaults = { // ... } let settings = Object.assign({}, defaults, options, options2, options3)}Because the target of Object.assign is mutated, we would not be able to go back and access the original default valuesafter the merge if we used it as the target// bad ideaObject.assign(defaults, options)// Okay alternative approachlet settings = {}Object.assign(settings, defaults, options)We want to preserve the original default values because it gives us the ability to compare them with the optionspassed, and act accordingly when necessary.function countdownTimer(target, timeLeft, options = {}) { let defaults = { // ... 
} let settings = Object.assign({}, defaults, options) // this wouldn't be possible without knowing if the argument // is different than the default if (settings.timeUnit !== defaults.timeUnit) { _conversionFunction(timeLeft, settings.timeUnit) }}Let’s run countdownTimer() passing the value for container as argument…countdownTimer($(".btn-undo"), 60, { container: ".new-post-options" })function countdownTimer(target, timeLeft, options = {}) { let defaults = { container: ".timer-display", timeUnit: "seconds" // ... } let settings = Object.assign({}, defaults, options) console.log(settings.container) // .new-post-options console.log(settings.timeUnit) // seconds}Level 4 - Arrays, Maps, and SetsArraysDestructuringWe typically access array elements by their index, but doing so for more than just a couple of elements can quicklyturn into a repetitive task.let users = ["Sam", "Tyler", "Brook"]// this will keep getting longer as we need to extract more elementslet a = users[0]let b = users[1]let c = users[2]console.log(a, b, c) // Sam Tyler BrookWe can use Array Destructuring to assign multiple values from an array to local variables.let users = ["Sam", "Tyler", "Brook"]let [a, b, c] = users // still easy to understand, AND less codeconsole.log(a, b, c) // Sam Tyler BrookValues can be discarded if desired.let [a, , b] = users // discarding "Tyler" valueconsole.log(a, b) // Sam BrookWe can combine destructuring with rest parameters to group values into other arrays.let users = ["Sam", "Tyler", "Brook"]let [first, ...rest] = users // groups remaining argument in an arrayconsole.log(first, rest) // Sam ["Tyler","Brook"]When returning arrays from functions, we can assign to multiple variables at once.function activeUsers() { let users = ["Sam", "Alex", "Brook"] return users}let active = activeUsers()console.log(active) // ["Sam", "Alex", "Brook"]let [a, b, c] = activeUsers()console.log(a, b, c) // Sam Alex BrookUsing for…ofThe for…of statement iterates over property values, and it’s a better way to loop over arrays and otheriterable objects.let names = ["Sam", "Tyler", "Brook"]for (let index in names) { console.log(names[index])}for (let name of names) { console.log(name)}For for..of statement cannot be used to iterate over properties in plain JavaScript objects out-of-the-box.let post = { title: "New Features in JS", replies: 19, lastReplyFrom: "Sam"}// this will not work// TypeError: post[Symbol.iterator] is not a functionfor (let property of post) { console.log("Value: ", property)}In order to work with for…of, objects need a special function assigned to the Symbol.iterator property. 
The presenceof this property allows us to know whether an object is iterable.let names = ["Sam", "Tyler", "Brook"]console.log(typeof names[Symbol.iterator]) // functionfor (let name of names) { console.log(name)}Since there is a function assigned, then the names array will work just fine with for..of.let post = { title: "New features in JS", replies: 19, lastReplyFrom: "Sam"}console.log(typeof post[Symbol.iterator]) // undefined// Results in TypeError: post[Symbol.iterator] is not a functionfor (let property of post) { console.log(property)}Finding an Element in an ArrayArray.find returns the first element in the array that satisfies a provided testing function.let users = [ { login: "Sam", admin: false }, { login: "Brook", admin: true }, { login: "Tyler", admin: true }]How can we find the first admin in the array?let admin = users.find(user => { return user.admin // returns first object for which user.admin is true})console.log(admin)We can alternatively shorten this function by omitting the curly braces and parenthesis in the function definition.let admin = users.find(user => user.admin)MapsMaps are a data structure composed of a collection of key/value pairs. They are very useful to store simpledata, such as property values. Each key is associated with a single value.Objects are first key/value stores that Javascript developers encounter, however when using Objects as maps, it’skeys are always converted to strings.let user1 = { name: "Sam" }let user2 = { name: "Tyler" }let totalReplies = {}totalReplies[user1] = 5totalReplies[user2] = 42console.log(totalReplies[user1]) // 42console.log(totalReplies[user2]) // 42console.log(Object.keys(totalReplies)) // [ "[object Object]" ]This happens because both objects are converted to the string [object Object] when they are used as keys inside oftotalReplies.We should stop using Javascript objects as maps, and instead use the Map object, which is also a simple key/valuedata structure. Any value may be used as either a key or a value, and objects are not converted to strings.let user1 = { name: "Sam" }let user2 = { name: "Tyler" }let totalReplies = new Map()totalReplies.set(user1, 5)totalReplies.set(user2, 42)console.log(totalReplies.get(user1)) // 5console.log(totalReplies.get(user2)) // 42We have to use the get() and set() methods to access values in Maps.Most of the time you will want to use the Map data structure, such as when keys are not known until runtime… such asuser input, or IDs generated by a database. You’ll still want to use Objects when the keys are static.Maps are iterable, so they can be used in a for…of loop. Each run of the loop returns a [key, value] pair foran entry in the Map.let mapSettings = new Map()mapSettings.set("user", "Sam")mapSettings.set("topic", "ES2015")mapSettings.set("replies", ["Can't wait!", "So cool"])for (let [key, value] of mapSettings) { console.log(`${key} = ${value}`)}WeakMapA WeakMap is a type of Map where only objects can be passed as keys. Primitive data types — such as strings,numbers, booleans, etc. 
— are not allowed.let user = {}let comment = {}let mapSettings = new WeakMap()mapSettings.set(user, "user")mapSettings.set(comment, "comment")console.log(mapSettings.get(user)) // userconsole.log(mapSettings.get(comment)) // commentmapSettings.set("title", "ES2015") // Invalid value used as weak map keyAll available methods on a WeakMap require access to an object used as a key.let user = {}let mapSettings = new WeakMap()mapSettings.set(user, "ES2015")console.log(mapSettings.get(user)) // "ES2015"console.log(mapSettings.has(user)) // trueconsole.log(mapSettings.delete(user)) // trueWeakMaps are not iterable, therefore they can’t be used with for…of.// error:// mapSettings[Symbol.iterator] is not a functionfor (let [key, value] of mapSettings) { console.log(`${key} = ${value}`)}Individual entries in a WeakMap can be garbage collected while the WeakMap itself still exists.let user = {} // all objects occupy memory spacelet userStatus = new WeakMap()userStatus.set(user, "logged") // Object reference passed as key to the WeakMap// ...someOtherFunction(user) // Once this function returns, 'user' can be garbage collected.WeakMaps don’t prevent the garbage collector from collecting objects currently used as keys, but that are no longerreferenced anywhere else in the system. The garbage collector removes the object from the WeakMap as well.SetsLimitations with ArraysArrays don’t enforce uniqueness of items. Duplicate entries are allowed.let tags = []tags.push("Javascript")tags.push("Programming")tags.push("Web")tags.push("Web")console.log("Total items ", tags.length) // Total items 4The Set object stores unique values of any type, whether primitive values or object references.let tags = new Set()tags.add("Javascript")tags.add("Programming")tags.add({ version: "2015" })tags.add("Web")tags.add("Web") // duplicate entries are ignoredconsole.log("Total items ", tags.size) // Total items 4Set objects are iterable, which means they can be used with for…of and destructuring.let tags = new Set()tags.add("Javascript")tags.add("Programming")tags.add({ version: "2015" })tags.add("Web")for (let tag of tags) { console.log(tag)}let [a, b, c, d] = tagsconsole.log(a, b, c, d) // Javascript Programming {version: '2015'} WebWeakSetThe WeakSet is a type of Set where only objects are allowed to be stored.let weakTags = new WeakSet()weakTags.add("JavaScript") // TypeError: Invalid value used in weak setweakTags.add({ name: "JavasScript" })let iOS = { name: "iOS" }weakTags.add(iOS)weakTags.has(iOS) // returns true, because it has that object presentweakTags.delete(iOS) // returns true, it successfully removed from the weaksetCan’t Read From a WeakSetWeakSets cannot be used with for…of and they offer no methods for reading values from it.let weakTags = new WeakSet()weakTags.add({ name: "JavasScript" })let iOS = { name: "iOS" }weakTags.add(iOS)// TypeError weakTags[Symbol.iterator] is not a functionfor (let wt of weakTags) { console.log(wt)}Using WeakSets to Show Unread PostsIf we can’t read values from a weakset, when should we use them?In a visual interface, we want to add a different background color to posts that have not yet been read.One way to “tag” unread posts is to change a property on each post object once they are read.let post = { // ... };// ... when post is clicked onpostList.addEventListener('click', (event) => { // ... post.isRead = true; // Mutates post object in order to indicate it's been read});// ... 
rendering list of posts// checks a property on each post objectfor (let post of postArray) { if(!post.isRead) { // adds css class on element if new _addNewPostClass(post.element); }}The issue with this code is that we are changing/mutating each post object unnecessarily. Using immutable objects inJavascript is a common practice that should be favored whenever possible. Doing so makes your code easier to understand,and leaves less room for errors.We can use WeakSets to create special groups from existing objects without mutating them. Favoring immutableobjects allows for much simpler code with no unexpected side effects.let readPosts = new WeakSet()// ... when post is clicked onpostList.addEventListener("click", event => { // ... readPosts.add(post) // Adds object to a group of read posts})// ... rendering postsfor (let post of postArray) { if (!readPosts.has(post)) { _addNewPostClass(post.element) }}While we can’t read values from a WeakSet, we can check to see if an object is present in the group.Level 5 - Classes and ModulesClassesAdding a Sponsor to the SidebarA common approach to encapsulation in JavaScript is using a constructor function.function SponsorWidget(name, description, url) { this.name = name this.description = description this.url = url}SponsorWidget.prototype.render = function() { // ...}Constructor functions are invoked with the new operator. Invoking the SponsorWidget function looks like this:let sponsorWidget = new SponsorWidget(name, description, url)sponsorWidget.render()Using the New Class SyntaxTo define a class, we use the class keyword followed by the name of the class. The body of a class is the part betweencurly braces.class SponsorWidget { render() { // ... }}Instance method definitions in classes look just like the method initializer shorthand in objects’.Initializing Values in the Constructor Functionclass SponsorWidget { // Runs every time a new instance is created with the new operator constructor(name, description, url) { // Assigning to instance variables make them accessible to other instance methods this.name = name this.description = description this.url = url } render() { // ... }}let sponsorWidget = new SponsorWidget(name, description, url)sponsorWidget.render()Accessing Class Instance Variablesclass SponsorWidget { constructor(name, description, url) { this.description = description this.url = url } render() { // this.url is an instance variable set in constructor let link = this._buildLink(this.url) // ... } _buildLink(url) { // .... }}There are no access modifiers like private or protected like there are in other languages.Prefixing a method with an underscore is a convention for indicating that it should not be invoked from the publicAPI.The class syntax is not introducing a new object model to JavaScript. It’s just syntactical sugar over the existingprototype-based inheritance (syntactical sugar).Class InheritanceWe can use class inheritance to reduce code repetition. Child classes inherit and specialize behavior defined inparent classes.The extends keyword is used to create a class that inherits methods and properties from another class. The supermethod runs the constructor function from the parent class.class Widget { constructor() { this.baseCSS = "site-widget" } parse(value) { // ... }}class SponsorWidget extends Widget { constructor(name, description, url) { super() // ... 
} render() { let parsedName = this.parse(this.name) let css = this._buildCSS(this.baseCSS) }}Overriding Inherited MethodsChild classes can invoke methods from their parent classes via the super object.class Widget { constructor() { this.baseCSS = "site-widget" } parse(value) { // ... }}class SponsorWidget extends Widget { constructor(name, description, url) { super() // ... } parse() { let parsedName = super.parse(this.name) return `Sponsor: ${parsedName}` } render() { // ... }}Super holds a reference to the parent version of the parse() method.Modules - Part 1Polluting the Global NamespaceThe common solution for modularizing code relies on using global variables. This increases the chances ofunexpected side effects and potential naming conflicts.These libraries shown below add to the global namespace.<!DOCTYPE html><body> <script src="./jquery.js"></script> <script src="./underscore.js"></script> <script src="./flash-messages.js"></script></body><script>// In our Javascript we simply reference these globally defined APIslet element = $("...").find(...);let filtered = _.each(...);flashMessage("Hello");</script>Global variables can cause naming conflicts.Creating ModulesLet’s create a new JavaScript module for displaying flash messages./* flash-messages.js */export default function(message) { alert(message)}/* app.js */// points to file with .js extension, which must be in the same folder// pulls in 'default' method from the imported fileimport flashMessage from "./flash-message"// call to flashMessage methodflashMessage("Hello")Modules still need to be imported via <script>, but no longer pollute the global namespace.<!DOCTYPE html><body> <script src="./flash-messages.js"></script> <script src="./app.js"></script></body></html>Can’t Default Export Multiple FunctionsModules give us total control over which methods we expose publicly. The default type export limits the number offunctions we can export from a module./* flash-messages.js */export default function(message) { alert(message)}// Not available outside this modulefunction logMessage(message) { console.log(message)}Using Named ExportsIn order to export multiple functions from a single module, we can use the named export./* flash-messages.js */export function alertMessage(message) { alert(message)}export function logMessage(message) { console.log(message)}/* app.js */import { alertMessage, logMessage } from "./flash-message"alertMessage("Hello from alert")logMessage("Hello from log")Importing a Module as an ObjectWe can also import the entire module as an object and call each funtion as a property from this object./* app.js */import * as flash from "./flash-message"flash.alertMessage("Hello from alert")flash.logMessage("Hello from log")Removing Repeated Export StatementsInstead of calling export statements every time we want to export (publicly expose) a function, we can instead exportthem as a list with a single command./* flash-messages.js */function alertMessage(message) { alert(message)}function logMessage(message) { console.log(message)}export { alertMessage, logMessage }Modules - Part 2Extracting Hardcoded ConstantsRefining constants across our application is unnecessary repetition and can lead to bugs./* load-profiles.js */function loadProfiles(userNames) { const MAX_USERS = 3 if (userNames.length > MAX_USERS) { // ... } const MAX_REPLIES = 3 if (someElement > MAX_REPLIES) { // ... }}export { loadProfiles }/* load-profiles.js */function listReplies(replies = []) { const MAX_REPLIES = 3 if (replies.length > MAX_REPLIES) { // ... 
}}export { listReplies }We cannot redefine constants within the same scope, but here we have 3 different functions with their own scope, so thiscode is correct. It’s still unnecessary duplication. The problem is that if we change one constant, then we have to findall the other ones and update them.Exporting ConstantsPlacing constants in their own module allows them to be reused across other modules and hides implementation details(a.k.a., encapsulation)./* constants.js */export const MAX_USERS = 3;export const MAX_REPLIES = 3;/* alternative constants.js */const MAX_USERS = 3;const MAX_REPLIES = 3;export { MAX_USERS, MAX_REPLIES };How to Import Constants**To import constants, we can use the exact same syntax for importing functions./* load-profiles.js */import { MAX_REPLIES, MAX_USERS } from "./constants"function loadProfiles(userNames) { if (userNames.length > MAX_USERS) { // ... } if (someElement > MAX_REPLIES) { // ... }}Exporting Class Modules With Default ExportClasses can also be exported from modules using the same syntax as functions. Instead of 2 individual functions, we nowhave 2 instance methods that are part of a class./* flash-message.js */export default class FlashMessage { constructor(message) { this.message = message } renderAlert() { alert(`${this.message} from alert`) } renderLog() { console.log(`${this.message} from log`) }}The default keyword allows this class to be set to any variable name once it’s imported.Using Class Modules with Default ExportImported classes are assigned to a variable using import and can then be used to create new instances./* app.js */import FlashMessage from "./flash-message"let flash = new FlashMessage("Hello")flash.renderAlert()flash.renderLog()Using Class Modules with Named ExportAnother way to export classes is to first define them, and then use the export statement with the class name insidecurly braces./* flash-message.js */class FlashMessage { // ...}export { FlashMessage }When using named export, the script that loads the module needs to assign it to a variable withthe same name as the class./* app.js */import { FlashMessage } from "./flash-message"let flash = new FlashMessage("Hello")flash.renderAlert()flash.renderLog()Level 6 - Promises, Iterators, and GeneratorsPromisesFetching Poll Results From the ServerIt’s very important to understand how to work with JavaScript’s single-thread model. Otherwise, we mightaccidentally freeze the entire app, to the detriment of user experience.Often users will click on a button, a link, or type within an input box, triggering some sort of Javascript action.While these actions occur, they might trigger other actions, such as fetching data from a back-end API.While we wait for a response, we still must be able to interact with the page. If we mess up and write bad code thatblocks the page, we can make elements non-responsive, affecting the user experience.Avoiding Code That BlocksOnce the browser blocks executing a script, it stops running other scripts, rendering elements and responding to userevents like keyboard and mouse interactions.// Page freezes until a value is returned// from the getPollResultsFromServer function.let results = getPollResultsFromServer("Sass vs. 
LESS")ui.renderSidebar(results)In order to avoid blocking the main thread of execution, we write non-blocking code like this:getPollResultsFromServer("Sass vs LESS", function(results) { ui.renderSidebar(results)})We are passing a callback to the function now, so that it can be responsible for calling the callback function when itreceives the response from the API server.Passing Callbacks to Continue ExecutionIn continuation-passing style (CPS) async programming, we tell a function how to continue execution by passingcallbacks.One issue is that this can grow to complicated nested code, resulting in error checking on every single callback.getPollResultsFromServer(pollName, function(error, results) { if (error) { //.. handle error } // ... ui.renderSidebar(results, function(error) { if (error) { //.. handle error } // ... sendNotificationToServer(pollName, results, function(error, response) { if (error) { //.. handle error } // ... doSomethingElseNonBlocking(response, function(error) { if (error) { //.. handle error } // ... }) }) })})The Best of Both Worlds With PromisesA Promise is a new abstraction that allows us to write async code in an easier way.getPollResultsFromServer("Sass vs. LESS") .then(ui.renderSidebar) .then(sendNotificationsToServer) .then(doSomethingElseNonBlocking) .catch(function(error) { console.log("Error: ", error) })This is still non-blocking, but not using nested callbacks anymore.Creating a New Promise ObjectThe Promise constructor function takes an anonymous function with 2 callback arguments known as handlers.function getPollResultsFromServer(pollName) { return new Promise(function(resolve, reject) { // called when the non-blocking code is done executing resolve(someValue) // called when an error occurs reject(someValue) })}Handlers are responsible for either resolving, or rejecting the Promise.The Lifecycle of a Promise ObjectCreating a new Promise automatically sets it to the pending state. Then, it can do 1 of 2 things: becomefulfilled or rejected.A Promise represents a future value, such as the eventual result of an asynchronous operation.let fetchingResults = getPollResultsFromServer("Sass vs. less")The fetchingResults variable contains the Promise object in the pending state.Resolving a PromiseLet’s wrap the XMLHttpRequest object API within a Promise. Calling the resolve() handler moves the Promise to afulfilled state.function getPollResultsFromServer(pollName) { return new Promise(function(resolve, reject) { let url = `/results/${pollName}` let request = new XMLHttpRequest() request.open("GET", url, true) request.onload = function() { if (request.status >= 200 && request.status < 400) { resolve(JSON.parse(request.response)) } } // ... request.send() })}Reading Results From a PromiseWe can use the then() method to read results from the Promise once it’s resolved. This method takes a function thatwill only be invoked once the Promise is resolved.function getPollResultsFromServer(pollName) { // ... resolve(JSON.parse(request.response)) // ...}let fetchingResults = getPollResultsFromServer("Sass vs Less")fetchingResults.then(function(results) { // renders HTML to the page ui.renderSidebar(results)})The callback passed to then() will receive the argument that was passed to resolve().Removing Temporary VariablesWe are currently using a temporary variable to store our Promise object, but it’s not really necessary. 
Let’sreplace it with chaining function calls.getPollResultsFromServer("Sass vs Less").then(function(results) { ui.renderSidebar(results)})Chaining Multiple ThensgetPollResultsFromServer("Sass vs Less") .then(function(results) { // only returns poll results from Orlando return results.filter(result => result.city === "Orlando") }) .then(function(resultsFromOrlando) { ui.renderSidebar(resultsFromOrlando) })Rejecting a PromiseWe’ll call the reject() handler for unsuccessful status codes and also when the onerror event is triggered on ourrequest object. Both move the Promise to a rejected state.function getPollResultsFromServer(pollName) { return new Promise(function(resolve, reject) { // ... request.onload = function() { if (request.status >= 200 && request.status < 400) { resolve(JSON.parse(request.response)) } else { reject(new Error(request.status)) } } request.onerror = function() { reject(new Error("Error Fetching Results")) } // ... request.send() })}Rejecting a Promise moves it to a rejected state.Catching Rejected PromisesOnce an error occurs, execution moves immediately to the catch() function. None of the remaining then() functionsare invoked.getPollResultsFromServer("Sass vs Less") .then(function(results) { // only returns poll results from Orlando return results.filter(result => result.city === "Orlando") }) .then(function(resultsFromOrlando) { ui.renderSidebar(resultsFromOrlando) }) .catch(function(error) { console.log("Error: ", error) })Passing Functions as ArgumentsWe can make our code more succinct by passing function arguments to then, instead of using anonymous functions.function filterResults(results) { // ... }// new method initializer shorthand syntaxlet ui = { renderSidebar(filteredResults){ // ... }};getPollResultsFromServer("Sass vs. Less") .then(filterResults) .then(ui.renderSidebar) .catch(function(error){ console.log("Error: ", error); });IteratorsWhat We Know About Iterables So FarArrays are iterable objects, which means we can use them with for…of.let names = ["Sam", "Tyler", "Brook"]for (let name of names) { console.log(name)}Plain JavaScript objects are not iterable, so they do not work with for…of out-of-the-box.let post = { title: "New Features in JS", replies: 19}// TypeError: post[Symbol.iterator] is not a functionfor (let p of post) { console.log(p)}Iterables Return IteratorsIterables return an iterator object. This object knows how to access items from a collection 1 at a time,while keeping track of its current position within the sequence.let names = ["Sam", "Tyler", "Brook"]for (let name of names) { console.log(name)}// what's really happening behind the sceneslet iterator = names[Symbol.iterator]()let firstRun = iterator.next()// firstRun: {done: false, value: "Sam"}let name = firstRun.valuelet secondRun = iterator.next()// firstRun: {done: false, value: "Tyler"}let name = secondRun.valuelet thirdRun = iterator.next()// firstRun: {done: false, value: "Brook"}let name = thirdRun.valueThe next() method is called by the loop. Once ‘done’ is true, the loop is ended.Understanding the next MethodEach time next() is called, it returns an object with 2 specific properties: done and value.done(boolean) Will be false if the iterator is able to return a value from the collection Will be true if the iterator is past the end of the collectionvalue(any) Any value returned by the iterator. 
When done is true, this returns undefined.{ done: true, value: undefined }The First Step Toward an Iterator ObjectAn iterator is an object with a next property, returned by the result of calling the Symbol.iterator method.let post = { title: "New Features in JS", replies: 19}post[Symbol.iterator] = function() { let next = () => { // ... } return { next }}// Cannot read property 'done' of undefinedfor (let p of post) { console.log(p)}Navigating the SequenceWe can use Object.keys to build an array with property names for our object. We’ll also use a counter (count) and aboolean flag (isDone) to help us navigate our collection.let post = { // ... }post[Symbol.iterator] = function() { let properties = Object.keys(this); // returns array with property names let count = 0; // used to access properties array by index let isDone = false; // set to true when done with the loop let next = () => { if (count >= properties.length) { isDone = true; } // 'this' refers to the post object return { done: isDone, value: this[properties[count++]] }; } return { next };};Running Our Custom IteratorWe’ve successfully made our plain JavaScript object iterable, and it can now be used with for…of.let post = { title: "New Features in JS", replies: 19};post[Symbol.iterator] = function() { // ... return {next};}// works properly nowfor let(p of post) { console.log(p);}// works with spread operator alsolet values = [...post];console.log(values); // ['New Features in JS', 19]Iterables With DestructuringLastly, destructuring assignments will also work with iterables.let [title, replies] = postconsole.log(title) // New Features in JSconsole.log(replies) // 19GeneratorsGenerator FunctionsThe *function ** declaration defines generator functions**. These are special functions from which we can use the*yield* keyword to return **iterator objects.function* nameList() { yield "Sam" // { done: false, value: "Sam" } yield "Tyler" // { done: false, value: "Tyler" }}It doesn’t matter where you place the star character in-between.function *nameList() { // ... }function* nameList() { // ... }function * nameList() { // ... 
}Generator Objects and for…ofGenerator functions return objects that provide the same next method expected by for…of, the spread operator,and the destructuring assignment.function* nameList() { yield "Sam" // { done: false, value: "Sam" } yield "Tyler" // { done: false, value: "Tyler" }}// nameList() returns a generator objectfor (let name of nameList()) { console.log(name)}let names = [...nameList()]console.log(names) // ["Sam", "Tyler"]let [first, second] = nameList()console.log(first, second) // Sam TylerReplacing Manual Iterator ObjectsKnowing how to manually craft an iterator object is important, but there is a shorter syntax.let post = { title: "New Features in JS", replies: 19 }post[Symbol.iterator] = function() { let properties = Object.keys(this) let count = 0 let isDone = false let next = () => { if (count >= properties.length) { isDone = true } return { done: isDone, value: this[properties[count++]] } } return { next }}Refactoring to Generator FunctionsEach time yield is called, our function returns a new iterator object and then pauses until it’s called again.let post = { title: "New Features in JS", replies: 19 }// generator functions can be anonymouspost[Symbol.iterator] = function*() { let properties = Object.keys(this) for (let p of properties) { yield this[p] }}// this is the same aspost[Symbol.iterator] = function*() { yield this.title yield this.replies}for (let p of post) { console.log(p)}// Output:// New Features in JS// 19"
} ,
{
"title" : "ES2015",
"category" : "",
"tags" : "",
"url" : "/resources/notes/javascript/es6/",
"date" : "",
"content" : "Javascript Fundamentals for ES6Table of Contents Overview Variables and Parameters Classes Functional Programming Built-In Objects Asynchronous Development in ES6 Objects in ES6 Modules Using ES6 TodayOverviewJavascript is an implementation of ECMAScript. The ES6 specificationthat defines the language was in draft status, with release in June 2015.ES6 introduces an extensive amount of new syntax to the language.The last time the specification tried to add this broadly to the languagewas with ECMAScript 4, but that project was abandoned.Some modifications were made with ECMAScript 5 / ES2009.Try It YourselfYou can use Plunker to test ES6 code. You can also do thesame directly in the Chrome JavaScript console using COMMAND + OPTION + J.You can also go to jsconsoleCompatibility ES6 Compatibility TableThe Repository ES6FundamentalsCourseFilesGet StartedVariables and ParametersLetLet allows us to define variables. We’ve always done this with the var keyword.However var has limitations when it comes to scope.With var there is only global scope, and function scope. Global scope is wherea variable is placed globally. Functional scope limits the scope of the variablewithin the function.However there isn’t block scope with var. In this code below, it looks likevar x = 3; might be scoped only to the block it’s defined in (within the ifstatement), but this isn’t the case.var doWork = function(flag) { if (flag) { var x = 3 } return x}This variable of x is actually made available throughout the function.Let provides us with true block scoping.var doWork = function(flag) { if (flag) { let x = 3 } return x}This code using let will return an error because x is not defined, even ifthe flag argument is true.describe("how let works" function(){ it("will provide block scoping, unlike var", function(){ var doWork = function(flag) { if(flag) { var x = 3; } return x; }; var result = doWork(true); expect(result).toBe(3); };});ConstRest ParametersDefault ParametersDestructuring AssignmentsClassesWe’ve always had the ability to create objects in JavaScript with propertiesand methods, however the new class syntax allows us to do that in familiar way,especially for those that have used Python, Java, C++, or Ruby.A class defines a blueprint for constructing objects, includes the pieces thatdefine the state, construction logic, and even setup inheritance relationshipsbetween objects.Functional ProgrammingThere are some new functional programming capabilities with JavaScript. It hasalways been a very functional language.Arrow FunctionArrow functions allow you to use a terse syntax to define functions. In manycases you do not need a return statement or curly braces.let add = (x, y) => x + yexpect(add(3, 5)).toBe(8)let numbers = function*(start, end) { for (let i = start; i <= end; i++) { console.log(i) yield i }}IteratorsGenerator FunctionsGenerator functions can create iterators.List Comprehension SyntaxBuilt-In ObjectsSet and Map collectionsNew data structures we can use in JavaScript, have memory friendly counterparts:the WeakMap and the WeakSet.New APIsObjects, arrays, numbers, and more.Asynchronous Development in ES6Promises, an object that will promise to give you the result of some asynchronousoperation in the future. There have been unofficial standards for the PromiseAPI, but now it has been standardized.Objects in ES6New APIs, new methods. 
New Metaprogramming capabilities. Modules JavaScript has lacked a modules system, where unofficial community standards have been in place such as CommonJS modules, or the Asynchronous Module Definition (AMD) standard. Using ES6 Today How people are using ES6 today. Many teams are using tools like Traceur to transpile ES6 code to ES5 code."
} ,
{
"title" : "Git",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/git/",
"date" : "",
"content" : "Back to Cheat SheetsOther Git TipsTable of Contents Man Pages Git Branching Git Checkout Git Commit Git Diff Git Log Git Push Git Rebase Git Remote Git Remove Git Reset Git Stash Git Update-Index Tagging Patching MiscMan Pages# Manuals# You can view the manual pages on any of the commands below by using# 'man git-' followed by the verbs supported by Git such as 'log' or 'commit'man git-logman git-blameman git-branchman git-checkoutman git-commitman git-diffman git-gcman git-logman git-pushman git-remoteman git-showman git-stashman git-tagGit Branching# view local branches with verbose output (includes remote tracking)git branch -vv# view remote branchesgit branch -r# view local and remote branchesgit branch -a# delete local branchgit branch -d my_branch_name# rename branchgit branch -m old_name new_name# delete a tracking branchgit branch -r -d otherguys/master# rename local and remote branchgit branch -m old_name new_namegit push origin :old_name # delete old remote branchgit push origin new_name # create new branch on remotegit branch --set-upstream-to=origin/new_name# push branch to remote and set as upstreamgit push --set-upstream remote branch-name# set upstream tracking for branchgit branch -u origin/feature_branchgit branch -set-upstream-to=origin/feature_branch# detect which branches contain a specific commitgit branch --contains 7688c3cGit Checkout# alternative way to clear all changesgit checkout .# clear all changes to one filegit checkout path/to/file# create and switch to new branchgit checkout -b my_branch_name# create and switch to new branch based on remote branchgit checkout -b local_branch_name origin/remote_branch_name# create a branch from an earlier commit (time travel is possible!)git checkout -b oldstuff commit_hash# create a branch that tracks a remote branchgit checkout -b branch_name remote_name/branch_name# merge selected files from another branchgit checkout frombranch thedir/thefile.txt anotherdir/anotherfile.txt# create local branch from remote branchgit checkout -b local_branch_name johndoe/remote_branch_nameGit Commit# update the last commit messagegit commit --amend -m "New message"# update the last commit with current date/timegit commit --amend --reset-author See Auto-squashing Git Commits StackOverflow # append staged changes into previous commitgit add .git commit --fixup=4321dcbagit rebase --interactive --autosquash 4321dcba^Git Diff# show unstaged changes since last commitgit diff# show staged and unstaged changes since last commitgit diff HEAD# show changes since second to last commitgit diff HEAD^# show changes since third to last commitgit diff HEAD^^# show changes since 5 commits agogit diff HEAD~5# show changes between most recent and second most recent commitgit diff HEAD^..HEAD# show changes between two commitsgit diff 4fb063f..f5a6ff9# show changes between tagged release and mastergit diff v1.0..master# show changes between two branchesgit diff master my-feature-branch# show changes using time rangesgit diff --since=1.week.ago --until=1.minute.ago# compare a file in different branchesgit diff mybranch main -- src/myfile.jsGit Log# search for all commits (any branch) by messagegit log --all --grep="contents of message"# view commits in oneline formatgit log --pretty=oneline# view commits in oneline format, with decorationsgit log --oneline --decorate# view last 20 logs in reverse with raw comment body only# good for reporting work performedgit log --reverse --pretty=format:"%B" -20# view all changes in specific file, ordered from 
most recent to oldestgit log -p path/to/file# view last 2 changes in specific filegit log -p -2 path/to/file# show statistics on changes to files in each commitgit log --stat# show visual representation of branch mergesgit log --graph# show log in custom format# %ad - author date# %an - author name# %h - SHA hash# %s - subject# %d - ref namesgit log --pretty=format:"%h %ad- %s [%an]"# show with date rangesgit log --until=1.minute.agogit log --since=1.day.agogit log --since=1.hour.agogit log --since=1.month.ago --until=2.weeks.agogit log --since=2000-01-01 --until=2012-12-21# setup and use alias for complex git commandsgit config --global alias.mylog "log --pretty=format:'%h %s [%an]' --graph"git mylog# find deleted file in historygit log --all --full-history -- <path-to-file>Git Merge-Base# Find the point at which a branch forked from another branchgit merge-base --fork-point master feature_branchGit Push# push to remote with upstream tracking specifiedgit push -u origin qa# push branch to remote repository (origin)git push origin my_branch_name# delete remote branchgit push origin :remote_branch_nameGit Rebase# interactive rebase from remote mastergit rebase -i origin/master# interactive rebase from last 4 commitsgit rebase -i HEAD~4Git Remote# view configured remote repositoriesgit remote -v# view information about a remotegit remote show origin# add remote repositorygit remote add johndoe https://github.com/johndoe/myproject.git# change remote repository for existing remotegit remote set-url origin https://github.com/USERNAME/REPOSITORY.git# remove remote repositorygit remote rm johndoe# deletes stale references from local repositorygit remote prune originGit Remove# remove file from repository, without actually deleting# good for files you only want locally, and have added to .gitignoregit rm --cached mylogfile.logGit Reset# clear all changesgit reset --hard# undo a successful merge or commitgit reset --hard HEAD^# undo a successful commit, keep changesgit reset --soft HEAD^Git Show# view changes in commit, using SHA hashgit show 6d3b08115028d013d676bc03ece72db3e6e06225# show last commitgit showgit show HEAD# show files involved in last commitgit show HEAD --name-onlyGit Stash# save current unstaged changes to stashgit stash# save current unstaged changes to stash with descriptiongit stash save <message># view list of stashesgit stash list# apply first stash to current branchgit stash apply stash@{0}# drop first stashgit stash drop stash@{0}# clear all stored stashesgit stash clearGit Update-Index# Apply Executable Permissions to a Filegit update-index --chmod=+x path/to/file# Stage a file and change permissions at the same time (Git v2.9)git add --chmod=+x path/to/file# View Files with Permissionsgit ls-files --stagePatching# create patch based on single commitgit format-patch -1 73699d42 --stdout > mycommit.patch# create patch based on last 5 commitsgit format-patch cc1dde0dd^..6de6d4b06 --stdout > mycommit.patch# create patch file (auto generated name) for current feature branch,# using remote master as basegit format-patch origin/master# check for errors before applying patchgit apply --check file.patch# inspect / view statistics for patchgit apply --stat file.patch# apply a patchgit am file.patchTagging# update local tags from remotegit fetch origin --tags# list tagsgit show-ref --tags --abbrev# Tag with annotationgit tag -a v1.1 -m "version 1.1 (CodeName: Jason)"# Tag without annotationgit tag v1.2# Tag a specific commitgit tag -a v1.0 74a360f# delete git tag locallygit tag -d tagName# 
delete remote taggit push origin :refs/tags/tagNamegit push --delete origin tagName# modify git tag locallygit tag -a v1.23 04567899ae -f# force push all local tagsgit push origin --tags -fMisc# show log of commits affecting specific filegit whatchanged path/to/file# display revisions and author for each line of a file (lines 450 - 470)git blame -L 450,470 lib/file_name.rb# show commit SHA, author, and date for changes to filegit blame index.html --date short# apply changes from local commit to current branchgit cherry-pick 04567899ae36651daf3dfa117a1088d594632370# create a commit that reverts a previous commitgit revert 04567899ae36651daf3dfa117a1088d594632370# List tags sorted in descending order, include first 5 in outputgit tag -l -n1 --sort=-v:refname | head -n 5# Cleanup unnecessary files and optimize the local repositorygit gc"
} ,
{
"title" : "GNU/Linux",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/gnu-linux/",
"date" : "",
"content" : "Back to Cheat SheetsUse the man command to read more about any of the following commands.For example, you can read more about the file command by running man file.Misc commands# Discover a files type (text, executable, etc)file /bin/bash# Create Tar Gzip archivetar -cvzf archive.tar.gz /path/to/folder/# scan network for hostssudo nmap -sS -O -v 192.168.0/24# View Manual Page for Executable Utilityman cp# Search for Manuals by Keywordman -k directories# view calendarcalgrep# reveal 10 lines before, and 20 lines after matching line for contextgrep -B10 -A20 'HTTP 404' /path/to/filetop# view running processes, including threadstop -Hsudo# list the sudo privileges for the current usersudo -ltmuxCommands# Create a named sessiontmux new -s session_name# List sessionstmux list-sessionstmux ls# Attach to a sessiontmux attach -t session_name# List infotmux info# List tmux commandstmux list-commands# List configured key bindings and commandstmux list-keysKey Bindings CTRL + B (the “PREFIX”) PREFIX % - Split panes vertically PREFIX " - Split panes horizontally PREFIX arrow_key - Move to another pane PREFIX + arrow_key - Hold the prefix while pressing to resize pane PREFIX z - Toggle between pane and fullscreen PREFIX c - Open a new window PREFIX number_key - Switch to a window by number PREFIX l - Toggle between current and last window PREFIX d - Detach from session"
} ,
{
"title" : "Introduction",
"category" : "",
"tags" : "",
"url" : "/resources/course/introduction/",
"date" : "2011-10-12 22:40:23 -0700",
"content" : "The pages which make up this course are intended for people who are completely new to website programming. They are designed to provide an explanation of the concepts which experienced web developers are already aware of, and thus provide the prerequisite material needed for a person to jump into a book on Ruby on Rails without being completely lost. In the case of this site, the articles are intended to serve as the prerequisite for the Rails 3 In Action book by Ryan Bigg and Yehuda Katz, which are available in print or ebook versions.As you go through the articles that make up this course you are going to encounter a large number of acronyms. The number of acronyms can be overwhelming, so we ask that you keep in mind that as you work with various website technologies these terms become common. Not all website developers are 100% fluent in the details of every technology, and all developers certainly reference articles on the internet when details on a technology are needed...especially when it comes to coding syntax.This course attempts to provide some simple explanations for technologies which can be explained with much more immense detail. If more detail is wanted, simply use the links provided which should mostly point to the corresponding Wikipedia.org article, although Wikipedia articles can be extremely dry and just as confusing. Feel free to use the 'Contact' page to provide me with any feedback or confusion you may be experiencing from these articles so that I may further elaborate on areas which are still not clearly explaining the needed concepts.[previous][parent][next]"
} ,
{
"title" : "Setting Up a Rails Development Environment for PC Users",
"category" : "",
"tags" : "",
"url" : "/resources/course/setting-up-rails-development-environment-for-pc-users/",
"date" : "2011-10-22 23:16:50 -0700",
"content" : "Ubuntu LinuxIf you're using a PC instead of a Mac, I recommend that you develop your Ruby on Rails application using Ubuntu Linux instead of Windows. Some Ruby Gems are dependent on software which is available for Linux or Mac OS X, but has not yet been ported to Windows (at least in a compatible manner). The Ruby on Rails community also consists of a majority of Linux or Mac OS X users, so most documentation that you find on various Rails related topics will provide instructions which apply to the commands for those operating systems, and not Windows. Overall using a Unix-based operating system to develop your Rails application will mean less headaches.Ubuntu is the easiest Linux distribution for use as a server or desktop operating system. It supports most hardware without requiring complex steps to make. There is also an abundance of documentation supporting the use of Ubuntu to perform various tasks.You can install Ubuntu without needing to prepare a separate partition on your hard drive via the Windows Installer for Ubuntu known as WUBI. WUBI places a file which contains the entire Ubuntu system in the root of your existing Windows partition, and configures your computer to provide you the choice of using Windows or Ubuntu when you start the computer.Alternatively you can run Ubuntu using Virtualbox inside of Windows, so you don't have to restart your computer each time you want to start working on your Rails application. This may cause your computer to run slow however, as it involves both operating systems being loaded into memory and run at the same time.Development EnvironmentYour development environment will consist of a text editor for modifying the source code of your application, a terminal which provides a command prompt (or command line interface) for running certain commands, and a web browser for viewing and testing the website application that you are developing on your local machine.Each of these programs we are going to use separately. I'm teaching it to you this way so that you can choose to use an alternative terminal or text editor later if you choose. Choice is good.Or if you find one that works for you, you can look into what is known as an Integrated Development Environment (IDE) which provides a text editor, terminal, web browser, and other tools all in one program. One example is RadRails by Aptana.There is also a web-based IDE tailored for Rails projects called Cloud9. It's free if you're using it with a project that you plan on sharing with the public as an open-source project for free on Github, otherwise you'll need to pay $15 a month.Setting up the Launcher Bar for your ProgramsThe latest version of Ubuntu uses a desktop interface known as Unity. This interface features a bar on the left side of the screen known as the Launcher. When you open any programs from the launcher, the launcher will disappear. If you place your mouse against the far left side of the screen, the launcher will reappear.Click on the 'Dash Home' icon at the top of the launcher to open up another feature of the Unity interface known as the 'Dash'. This area provides options to search and open programs which are installed within your Ubuntu system.Search for 'gEdit' and the Dash will display the 'Text Editor' program, which you'll be using as your text editor. Go ahead and drag this icon to your launcher on the left side of the screen. 
After the 'gEdit' icon appears in the launcher, feel free to click on the icon and hold for a second or two, then drag the icon nearer to the top of the list of icons.If you right-click on icons, such as 'LibreOffice Impress' (a Microsoft Powerpoint-like program) or 'LibreOffice Calc' (a Microsoft Excel-like program), you can choose 'Remove from launcher' to remove those icons from the launcher so that it's less cluttered.Go ahead and search for 'Terminal' from the dash, and drag this to your launcher.By default Ubuntu will come with Firefox installed. You might want to use Chromium, which is like the Linux version of Google Chrome. You can do this by opening the 'Ubuntu Software Center' program from the dash or your launcher, searching for 'Chromium' and then choosing to install it. After it's installed it will be available if you search for it in the dash, and then drag the icon to your launcher.Text EditorWhen you're spending most of your time coding, you want to make sure that your experience working with code is simple, easy on the eyes, and productive. On Mac OS X there is a program called Textmate which is very popular with programmers because it runs fast, includes a built-in file browser, supports syntax coloring (which helps you avoid mistakes), is elegant and easy on the eyes, and includes code completion features.Unfortunately Textmate isn't available for Linux, however the text editor that comes with Ubuntu, gEdit, can be set up to work much like Textmate.Add Plugin Support to gEditFirst open the Ubuntu Software Center and search for 'gedit-plugins'. This will find a package with the description 'Set of plugins for gEdit'. Install this package.Download and Install Monaco FontDownload the Monaco font using this link. After the file is downloaded, open it up from your Home directory under the 'Downloads' folder. It should display examples of how the font looks, and provide an 'Install' button in the bottom right corner. Click on this button to install the font.Darkmate Theme for gEditDownload this Darkmate theme file for gEdit, which emulates the color scheme of Textmate and is very easy on the eyes. Install the file by opening gEdit, holding down the ALT key to reveal the menu items at the top of the screen, and going to Edit > Preferences > Fonts and Colors. Click on the 'Add' button and then navigate and choose the file from your Downloads folder. After it installs and applies the theme, click on the 'Close' button in the bottom right corner of the Preferences window.File Browser for gEditDownload the last version of the Class Browser plugin for gEdit (version 0.2.1) using this link. The file is a Tar/Gzip file, which is like a ZIP file that is supported by all the Linux-based operating systems. Open the file and click on the 'Extract' button. This will cause a window to pop up and allow you to choose where you wish to extract the plugin folder. Choose your home folder, which bears your user name, and then press CTRL+H to display the hidden files/folders. Open '.local', then 'share'. You'll see some folders in here, but likely won't see one for 'gedit'. Click on the 'Create folder' button in the top right section and create a folder named 'gedit'. This will place you inside of the folder. Next click on 'Create folder' once again and name it 'plugins'. Click on the 'Extract' button to place the plugin folder under 'plugins'.Open the gEdit text editor, hold down the ALT key and go to Edit > Preferences, then select the Plugins tab. 
You should have a check mark next to 'File Browser Panel'. Close the Preferences window, and while still holding the ALT key select View > Side Panel. This will cause the file browser to display on the left side of the screen.Installing RubyNow that you have a text editor which has features that closely resemble Textmate, a Terminal which provides a command prompt, and a web browser, we're ready to install the libraries needed for the Ruby on Rails application.The first step is to install Ruby, which is the program which reads and interprets Ruby code from the scripts you create, and runs those scripts in real time.Ruby version 1.8.7 is recommended because it's known to be stable without causing any issues. Ruby 1.9.2 is also compatible with the latest version of Ruby on Rails (version 3.1), however it's fairly new, and a rule of thumb is that it is best to avoid the newest software if you're not a seasoned developer who can find the cause of a problem, find a solution, and report the bugs or issues that you've found.To install Ruby 1.8.7 for Ubuntu, open the Terminal program and run 'sudo apt-get install ruby-full'.jason@ubuntu:/usr/bin$ sudo apt-get install ruby-fullReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following extra packages will be installed: libtcltk-ruby1.8 ri1.8 ruby1.8-dev ruby1.8-full tcl8.5 tk8.5Suggested packages: tclreadlineThe following NEW packages will be installed: libtcltk-ruby1.8 ri1.8 ruby-full ruby1.8-dev ruby1.8-full tcl8.5 tk8.50 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.Need to get 5,009 kB/5,635 kB of archives.After this operation, 66.2 MB of additional disk space will be used.Do you want to continue [Y/n]? YInstalling Ruby GemsMost programming languages provide standard features such as variables, operators, conditional statements, loops, etc. In addition to these standard features, a core set of libraries is typically provided which provides functions for working with the file system, networking, etc.All these features and libraries are the building blocks of any computer program, however there are many common actions which you may want to perform with your program that you would need to program from scratch. Luckily there are programming libraries which are made freely available to the public and provide methods for performing certain actions via your own Ruby scripts or Rails application.In some cases you simply download a library file, place it inside of the directories for your project, and then add some sort of command that points to the location of that library file so that its functions may be included and used by your own script.With common Ruby libraries, this isn't necessary because Ruby supports a program known as RubyGems which downloads and installs libraries for you, and then makes those libraries available for inclusion in your scripts without having to know the path of the libraries. These libraries are known as 'gems' in the Ruby community, and the RubyGems program relies on the centralized repository of gems hosted at http://rubygems.org/.Programs such as Ruby Gems are known as package managers, and are available for many other programming languages as well. For the Perl programming language there is a package manager and repository made available by the Comprehensive Perl Archive Network - CPAN.org. 
For the PHP programming language there is a package manager and repository made available by the PHP Extension and Application Repository - PEAR.PHP.NET.Rails itself is a Gem, as are the many libraries which the Rails framework relies on (ActiveModel, ActiveResource, ActiveSupport, etc). When you install Rails, these other gems are installed as dependencies, which means that the Rails gem needs them, so Ruby Gems installs them as well.To install Ruby Gems on your Ubuntu system you could use the Ubuntu Software Center by searching for 'rubygems', but I'm advising against this because you're not able to use commands such as 'gem update --system' to update Ruby Gems itself.The best thing to do is to go to the RubyForge download page for Ruby Gems and download the latest 'tgz' package. Currently this is rubygems-1.8.10.tgz. Download the package, and then open your Terminal program and run 'cd ~/Downloads/' to switch to the 'Downloads' folder under your home directory. As shown below I used the 'ls' command to view the files in my Downloads folder, which showed that the file was present. Next 'tar -zxvf rubygems-1.8.10.tgz' is the command to extract the contents of the file to the same directory.jason@ubuntu:~$ cd ~/Downloads/jason@ubuntu:~/Downloads$ lsrubygems-1.8.10.tgzgedit_classbrowser-0.2.1 MONACO.TTFjason@ubuntu:~/Downloads$ tar -zxvf rubygems-1.8.10.tgzrubygems-1.8.10/rubygems-1.8.10/.autotestrubygems-1.8.10/.documentrubygems-1.8.10/.gemtestrubygems-1.8.10/bin/...After this completes use 'cd rubygems-1.8.10' to switch to the new directory that was just created. From within this directory run 'sudo ruby setup.rb' to install Ruby Gems.jason@ubuntu:~/Downloads/rubygems-1.8.10$ sudo ruby setup.rb [sudo] password for jason: RubyGems 1.8.10 installed== 1.8.10 / 2011-08-25RubyGems 1.8.10 contains a security fix that prevents malicious gems fromexecuting code when their specification is loaded. Seehttps://github.com/rubygems/rubygems/pull/165 for details.* 5 bug fixes: * RubyGems escapes strings in ruby-format specs using #dump instead of #to_s and %q to prevent code injection. Issue #165 by Postmodern * RubyGems attempt to activate the psych gem now to obtain bugfixes from psych. * Gem.dir has been restored to the front of Gem.path. Fixes remaining problem with Issue #115 * Fixed Syck DefaultKey infecting ruby-format specifications. * `gem uninstall a b` no longer stops if gem "a" is not installed.------------------------------------------------------------------------------RubyGems installed the following executables: /usr/bin/gem1.8This installs Ruby Gems, but doesn't make it available by simply using 'gem' as the command. 
Run the command 'sudo ln -s /usr/bin/gem1.8 /usr/bin/gem' to create a shortcut to gem1.8 using the command 'gem'.jason@ubuntu:~$ sudo ln -s /usr/bin/gem1.8 /usr/bin/gem[sudo] password for jason: jason@ubuntu:~$ gem --version1.8.10To ensure that it's all up-to-date, run the command 'gem update --system'.jason@ubuntu:~$ sudo gem update --systemUpdating rubygems-updateFetching: rubygems-update-1.8.11.gem (100%)Successfully installed rubygems-update-1.8.11Installing RubyGems 1.8.11RubyGems 1.8.11 installed== 1.8.11 / 2011-10-03* Bug fix: * Deprecate was moved to Gem::Deprecate to stop polluting the top-level namespace.------------------------------------------------------------------------------RubyGems installed the following executables: /usr/bin/gem1.8RubyGems system software updatedjason@ubuntu:~$Installing RailsFrom the Terminal window run the command 'sudo gem install rails'. You'll be prompted to enter your password for your Ubuntu user account. Type this in and press enter again. Remember that sometimes you'll be prompted from the command line to enter your password, and you won't see stars or characters to indicate each character of your password that you're typing in. Just type it in and press ENTER. If you've typed it wrong, try again until it works.The output you get should be similar to this:jason@ubuntu:~$ sudo gem install rails[sudo] password for jason:Fetching: multi_json-1.0.3.gem (100%)Fetching: activesupport-3.1.1.gem (100%)Fetching: builder-3.0.0.gem (100%)Fetching: i18n-0.6.0.gem (100%)Fetching: activemodel-3.1.1.gem (100%)Fetching: rack-1.3.5.gem (100%)Fetching: rack-cache-1.1.gem (100%)Fetching: rack-test-0.6.1.gem (100%)Fetching: rack-mount-0.8.3.gem (100%)Fetching: hike-1.2.1.gem (100%)Fetching: tilt-1.3.3.gem (100%)Fetching: sprockets-2.0.3.gem (100%)Fetching: erubis-2.7.0.gem (100%)Fetching: actionpack-3.1.1.gem (100%)Fetching: arel-2.2.1.gem (100%)Fetching: tzinfo-0.3.30.gem (100%)Fetching: activerecord-3.1.1.gem (100%)Fetching: activeresource-3.1.1.gem (100%)Fetching: mime-types-1.16.gem (100%)Fetching: polyglot-0.3.2.gem (100%)Fetching: treetop-1.4.10.gem (100%)Fetching: mail-2.3.0.gem (100%)Fetching: actionmailer-3.1.1.gem (100%)Fetching: rake-0.9.2.2.gem (100%)Fetching: thor-0.14.6.gem (100%)Fetching: rack-ssl-1.3.2.gem (100%)Fetching: json-1.6.1.gem (100%)Building native extensions. This could take a while...Fetching: rdoc-3.11.gem (100%)Depending on your version of ruby, you may need to install ruby rdoc/ri data:= 1.9.2 : nothing to do! 
Yay!Fetching: railties-3.1.1.gem (100%)Fetching: bundler-1.0.21.gem (100%)Fetching: rails-3.1.1.gem (100%)Successfully installed multi_json-1.0.3Successfully installed activesupport-3.1.1Successfully installed builder-3.0.0Successfully installed i18n-0.6.0Successfully installed activemodel-3.1.1Successfully installed rack-1.3.5Successfully installed rack-cache-1.1Successfully installed rack-test-0.6.1Successfully installed rack-mount-0.8.3Successfully installed hike-1.2.1Successfully installed tilt-1.3.3Successfully installed sprockets-2.0.3Successfully installed erubis-2.7.0Successfully installed actionpack-3.1.1Successfully installed arel-2.2.1Successfully installed tzinfo-0.3.30Successfully installed activerecord-3.1.1Successfully installed activeresource-3.1.1Successfully installed mime-types-1.16Successfully installed polyglot-0.3.2Successfully installed treetop-1.4.10Successfully installed mail-2.3.0Successfully installed actionmailer-3.1.1Successfully installed rake-0.9.2.2Successfully installed thor-0.14.6Successfully installed rack-ssl-1.3.2Successfully installed json-1.6.1Successfully installed rdoc-3.11Successfully installed railties-3.1.1Successfully installed bundler-1.0.21Successfully installed rails-3.1.131 gems installedInstalling ri documentation for multi_json-1.0.3...Installing ri documentation for activesupport-3.1.1...Installing ri documentation for builder-3.0.0...Installing ri documentation for i18n-0.6.0...Installing ri documentation for activemodel-3.1.1...Installing ri documentation for rack-1.3.5...Installing ri documentation for rack-cache-1.1...Installing ri documentation for rack-test-0.6.1...Installing ri documentation for rack-mount-0.8.3...Installing ri documentation for hike-1.2.1...Installing ri documentation for tilt-1.3.3...Installing ri documentation for sprockets-2.0.3...Installing ri documentation for erubis-2.7.0...Installing ri documentation for actionpack-3.1.1...Installing ri documentation for arel-2.2.1...Installing ri documentation for tzinfo-0.3.30...Installing ri documentation for activerecord-3.1.1...Installing ri documentation for activeresource-3.1.1...Installing ri documentation for mime-types-1.16...Installing ri documentation for polyglot-0.3.2...Installing ri documentation for treetop-1.4.10...Installing ri documentation for mail-2.3.0...Installing ri documentation for actionmailer-3.1.1...Installing ri documentation for rake-0.9.2.2...Installing ri documentation for thor-0.14.6...Installing ri documentation for rack-ssl-1.3.2...Installing ri documentation for json-1.6.1...Installing ri documentation for rdoc-3.11...Installing ri documentation for railties-3.1.1...Installing ri documentation for bundler-1.0.21...Installing ri documentation for rails-3.1.1...Installing RDoc documentation for multi_json-1.0.3...Installing RDoc documentation for activesupport-3.1.1...Installing RDoc documentation for builder-3.0.0...Installing RDoc documentation for i18n-0.6.0...Installing RDoc documentation for activemodel-3.1.1...Installing RDoc documentation for rack-1.3.5...Installing RDoc documentation for rack-cache-1.1...Installing RDoc documentation for rack-test-0.6.1...Installing RDoc documentation for rack-mount-0.8.3...Installing RDoc documentation for hike-1.2.1...Installing RDoc documentation for tilt-1.3.3...Installing RDoc documentation for sprockets-2.0.3...Installing RDoc documentation for erubis-2.7.0...Installing RDoc documentation for actionpack-3.1.1...Installing RDoc documentation for arel-2.2.1...Installing RDoc documentation for 
tzinfo-0.3.30...Installing RDoc documentation for activerecord-3.1.1...Installing RDoc documentation for activeresource-3.1.1...Installing RDoc documentation for mime-types-1.16...Installing RDoc documentation for polyglot-0.3.2...Installing RDoc documentation for treetop-1.4.10...Installing RDoc documentation for mail-2.3.0...Installing RDoc documentation for actionmailer-3.1.1...Installing RDoc documentation for rake-0.9.2.2...Installing RDoc documentation for thor-0.14.6...Installing RDoc documentation for rack-ssl-1.3.2...Installing RDoc documentation for json-1.6.1...Installing RDoc documentation for rdoc-3.11...Installing RDoc documentation for railties-3.1.1...Installing RDoc documentation for bundler-1.0.21...Installing RDoc documentation for rails-3.1.1...jason@ubuntu:~$[previous][next]"
} ,
{
"title" : "Static and Dynamic Resources, and Rewrite Engines",
"category" : "",
"tags" : "",
"url" : "/resources/course/static-dynamic-resources-rewrite-engines/",
"date" : "2011-10-12 22:03:18 -0700",
"content" : "When requesting a single webpage there may be several requests and responses for each resource. A resource being requested may be an HTML coded web page, which then leads to several images which must be requested and loaded inside of the page, as well as other files such as Javascript or CSS libraries needed by the page. This sequence of requests-response transactions which take place between an HTTP client (browser) and an HTTP web server is known as an HTTP session.Static ResourcesStatic resources, such as files containing pure HTML coding, are accessed from the server in the same form as they are on the server itself. For example, many web hosting companies provide access to the files on the server via a File Transfer Protocol (FTP) server program that is also running on the web server. The account owner uses an FTP client program such as FileZilla or Dreamweaver to access and upload files to the directory where the website files reside. The FTP account may provide a directory called 'public_html' or 'public', which represents the web root for the public website. Files put directly into this folder will be available directly from the root of the website.The reason why the folder which contains the website files is a subdirectory of the root folder for the account is so that files which you do not want to be accessed via your website publicly can be uploaded to your account outside of the public_html folder.For a website using 'www.example.com', a file named 'index.html' can be placed in this directory and will be accessible just by going to http://www.example.com/ . This occurs because the web server is configured to provide any files named 'index.html', as the root webpage for the site. Such a file must be overwritten via FTP with a newer version before the content of that webpage changes for the public. If a folder is created in the 'public_html' directory named 'news', and then a file is placed in that directory named 'updates.html', the URL to access the file would be http://www.example.com/news/updates.htmlDynamic ResourcesResources provided by a web server may also be dynamic, which means that the contents of the resource/page are generated on-the-fly by coding placed on the server. This type of coding is known as server side scripting because it is executed on the server and influences the response from the web server before it reaches the clients web browser. This is in contrast with Javascript coding which is a client side scripting language that is executed inside of the web browser, commonly used to modify or animate elements of the web page in the browser in real time. HTML and CSS coding are also interpreted by the browser on the client side, but are not technically considered scripting. HTML is a markup language used to describe elements which are rendered on a page, and cascading style sheets (CSS) is a language used to describe the style/presentation of those elements defined by the HTML.One of the most popular server side coding options available is PHP. Most hosting companies configure their servers to scan files for PHP coding when the files have filenames ending in '.php' instead of the typical '.html' file extension. When found, this PHP coding is executed and dynamically generates certain portions of the page which are provided in the response. This scanning by the web server is also referred to as 'parsing'. This allows PHP coding to be placed in the middle of a file that is predominantly coded in HTML. 
The HTML coding is provided as is, but the PHP portions of the page are executed and replaced with any text output by the PHP coding.Although dynamic coding is typically used to output text content such as HTML, CSS, or Javascript coding in a dynamic function, it's possible for scripts to also return images if an image library is in use for a specific purpose. This is less likely, but still possible, which is why I've referred to them as static or dynamic resources instead of static or dynamic web pages.Rewrite EnginesMore advanced website applications do not operate within this paradigm where the URL of a specific resource corresponds directly with specific directories or files on the web server. For instance, the Wordpress blogging system relies on an extension of the Apache web server known as mod_rewrite to direct all requests directly to the main index.php file located in the root of the directory which stores the Wordpress system files. The index.php file is programmed to compare the request, such as '/2010/02/how-to-tie-your-shoe', to a list of routing rules which determine which internal resource within the Wordpress system should be served, such as a page or blog post.This method puts the website URL for each page completely within the control of the web application instead of the web server it is running under. This method allows website developers to specify URLs for each section of the site which enhance the search engine optimization for those pages. For example, a system not using a rewrite engine might provide pages dynamically using the following address:http://www.example.com/index.php?p=5In the above URL 'p=5' is telling the index.php file to provide page ID '5' in the response. Search engines prefer not to follow links with parameters like this, as they know all the content is being dynamically fed via scripting driven by a database. They prefer to index pages which were created manually and placed into directories with URLs ending in /about/ or /about/index.html. Rewrite engines make it possible to map such URLs directly to pages which are dynamically generated via your database-driven blog or content management system.The example page provided by a system using a rewrite engine might provide the page via this URL:http://www.example.com/seo-friendly-page-title-goes-here/[previous][parent][next]"
} ,
{
"title" : "Capistrano Deployment for Apache2 with Passenger",
"category" : "",
"tags" : "",
"url" : "/resources/course/capistrano-deployment-for-apache2-with-passenger/",
"date" : "2011-12-29 12:31:19 -0800",
"content" : "This page outlines how to configure a Ruby on Rails 3 application with the Capistrano gem, and deploy your application to a remote server running Apache2 and the Passenger (mod_rails) extension.Preparing the ServerCreating Remote User AccountFirst we need to setup the server to host the Rails application. This is done by creating a user account which will have a home directory to store the files being deployed to the server. The following commands will create the user account with a home directory, and provide the user with Bash shell access.useradd -m testuser -s /bin/bashpasswd testuserAdding Public Key to Remote UserAt some point you should have created an SSH key pair for use with the Git repository you're using. Log into the remote server as the new user you've created, and place the contents of your public key file (~/.ssh/id_rsa.pub) inside of the file located at ~/.ssh/authorized_keys under the remote account.Before the '.ssh' folder exists in the remote account, we should create an SSH key pair for the remote account too.testuser@remoteserver:~$ pwd/home/testusertestuser@remoteserver:~$ ssh-keygen -t dsaGenerating public/private dsa key pair.Enter file in which to save the key (/home/testuser/.ssh/id_dsa): Created directory '/home/testuser/.ssh'.Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/testuser/.ssh/id_dsa.Your public key has been saved in /home/testuser/.ssh/id_dsa.pub.The key fingerprint is:8e:69:75:a8:ba:c0:9a:a2:90:67:69:40:48:14:64:37 testuser@remoteserverThe key's randomart image is:+--[ DSA 1024]----+|o*.E ||+ b . ||.. ||. . ||. S . || + . 3 . ||o x = . ||o* . n ||* o. |+-----------------+Once this is completed, you can switch to the .ssh folder and create the 'authorized_keys' folder, and paste your own public key onto the first line inside of the file.testuser@remoteserver:~$ cd .sshtestuser@remoteserver:~/.ssh$ nano authorized_keysWith your local computers private key added to the 'authorized_keys' file of the remote account, you'll be able to login to the remote account via SSH from your local computer without a password (or at least using the password setup with your SSH key pair).Next copy the public key under ~/.ssh/id_dsa.pub on the remote server, and configure your Git repository to with the remote users public key so that the remote user may download updates from the Git repository.Setting up Deployment ScriptFor our Rails application we will use Capistrano to deploy updates to our application to the live server, with the server configured to use Passenger (mod_rails) with Apache2.Assuming that the Rails application has already been setup on your local machine in a directory, initialized to be a local Git repository that is configured to push to an upstream server, we first need to setup the Rails application to be used with the Capistrano gem.Install Capistrano GemRun the following from the command line to install the Capistrano gem if you haven't already.gem install capistranoConfigure Rails Application to Require CapistranoOpen your Gemfile in your Rails application root folder and add the following on a new line so that Capistrano is available in the Rails application environment.gem 'capistrano'While you have the Gemfile open, also add the 'execjs' and 'therubyracer' gem requirements in the assets group to ensure that the application deploys successfully without errors with the new Sprockets/Asset Pipeline system which precompiles assets (images, stylesheets, Javascript) 
under the /public folder when pushing your application out to the remote server.group :assets do gem 'sass-rails', '~> 3.1.5' gem 'coffee-rails', '~> 3.1.1' gem 'uglifier', '>= 1.0.3' gem 'execjs' gem 'therubyracer'endAdd Deployment Configuration to ApplicationFrom the root directory of your application, run the following command to setup your application to use Capistrano for deployment. This adds a file named 'deploy.rb' under your 'config' folder.capify .Open the deploy.rb file and paste the following deployment recipe into the file, and then change the repository address, usernames, passwords, and other items as needed.############################################################## Application#############################################################set :application, "testapplication"############################################################## Settings############################################################## runs 'bundle install' during deployment# precompiles assets in productionrequire "bundler/capistrano"load "deploy/assets"default_run_options[:pty] = true # Must be set for the password prompt from git to workset :use_sudo, false############################################################## Servers#############################################################set :user, "testuser" # The remote server user for deploysset :scm_passphrase, "remoteuserpassword" # The remote server user's passwordset :deploy_to, "/home/#{user}"set :ssh_options, { :forward_agent => true }set :domain, "myapplicationdomain.com"server domain, :app, :webrole :db, domain, :primary => true############################################################## Git#############################################################set :scm, :gitset :repository, "git@git.myrepository.com:testapp.git"set :branch, "master"set :deploy_via, :remote_cache############################################################## Passenger#############################################################namespace :passenger do desc "Restart Application" task :restart do run "touch #{current_path}/tmp/restart.txt" endendafter :deploy, "passenger:restart"You can view a list of Capistrano tasks which are available, much like Rake tasks by using the 'cap -T' command.$ cap -Tcap bundle:install # Install the current Bundler environment.cap deploy # Deploys your project.cap deploy:assets:clean # Run the asset clean rake task.cap deploy:assets:precompile # Run the asset precompilation rake task.cap deploy:check # Test deployment dependencies.cap deploy:cleanup # Clean up old releases.cap deploy:cold # Deploys and starts a `cold' application.cap deploy:migrate # Run the migrate rake task.cap deploy:migrations # Deploy and run pending migrations.cap deploy:pending # Displays the commits since your last deploy.cap deploy:pending:diff # Displays the `diff' since your last deploy.cap deploy:restart # Blank task exists as a hook into which to inst...cap deploy:rollback # Rolls back to a previous version and restarts.cap deploy:rollback:code # Rolls back to the previously deployed version.cap deploy:setup # Prepares one or more servers for deployment.cap deploy:start # Blank task exists as a hook into which to inst...cap deploy:stop # Blank task exists as a hook into which to inst...cap deploy:symlink # Updates the symlink to the most recently deplo...cap deploy:update # Copies your project and updates the symlink.cap deploy:update_code # Copies your project to the remote servers.cap deploy:upload # Copy files to the currently deployed version.cap 
deploy:web:disable # Present a maintenance page to visitors.cap deploy:web:enable # Makes the application web-accessible again.cap invoke # Invoke a single command on the remote servers.cap passenger:restart # Restart Applicationcap shell # Begin an interactive Capistrano session.Some tasks were not listed, either because they have no description,or because they are only used internally by other tasks. To see alltasks, type `cap -vT'.Extended help may be available for these tasks.Type `cap -e taskname' to view it.Run 'cap deploy:setup' to configure the remote account to contain the folders needed to host the various folders and files needed for our deployed application.$ cap deploy:setup * executing `deploy:setup' * executing "mkdir -p /home/testuser /home/testuser/releases /home/testuser/shared /home/testuser/shared/system /home/testuser/shared/log /home/testuser/shared/pids" servers: ["myapplicationdomain.com"] [myapplicationdomain.com] executing command command finished in 122ms * executing "chmod g+w /home/testuser /home/testuser/releases /home/testuser/shared /home/testuser/shared/system /home/testuser/shared/log /home/testuser/shared/pids" servers: ["myapplicationdomain.com"] [myapplicationdomain.com] executing command command finished in 100msIf this runs successfully, like the example above, use 'cap deploy' to push out the current version of your application in the Git repository to the remote server, which will begin with something like this:$ cap deploy * executing `deploy' * executing `deploy:update' ** transaction: start * executing `deploy:update_code' updating the cached checkout on all servers executing locally: "git ls-remote git@git.myrepository.com:testapp.git master" command finished in 1975ms...After deploying for the first time, you're then free to configure Apache with a configuration similar to the following. With Passenger you simply need to point to the 'public' folder under the 'current' folder which stores the most current deployed version of your application.<VirtualHost *:80> ServerAdmin your@emailaddress.com ServerName myapplicationdomain.com ServerAlias www.myapplicationdomain.com DocumentRoot /home/testuser/current/public <Directory "/home/testuser/current/public"> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory></VirtualHost>"
} ,
{
"title" : "Web Hosting",
"category" : "",
"tags" : "",
"url" : "/resources/course/web-hosting/",
"date" : "2011-10-12 19:56:18 -0700",
"content" : "Data CentersA website is hosted on a web server, accessible via a network such as the Internet or a private local area network through an Internet address known as a Uniform Resource Locator (URL). All publicly accessible websites collectively constitute the World Wide Web. A majority of web servers are located in facilities known as data centers, which providing the space, power, air cooled environment, and bandwidth needed to optimally run a web server with optimal uptime.Hosting TypesThere are several types of hosting you can choose from when deciding to setup a website. These include shared hosting, dedicated hosting, colocation, virtual private servers (VPS), or cloud hosting.Dedicated HostingDedicated hosting is where you pay a hosting company to provide a web server machine to you for the duration of the time you are paying for the service. They build the machine, install the operating system you want (Linux or Windows) and set it up in the data center for you. Typically they also setup their monitoring system to periodically check and make sure your server is still up and running. Another upside to dedicated hosting is that you have full control over the server, instead of limited access to one account like you do with shared hosting.Dedicated hosting costs at least $80 or more per a month, and is only really necessary for businesses which are hosting multiple websites, or a single website which receives a great amount of traffic and/or requires the ultimate amount of uptime. If your website is a core function of your business (such as an online shopping cart system), then you may definitely want to upgrade to dedicated or VPS hosting to ensure optimal uptime of your website(s).ColocationIf you're wanting dedicated hosting, but at a slightly lower cost, or you have a special web server hardware setup, then you'll want to consider colocation. Like dedicated hosting the data center provides the space, power, and bandwidth needed, but you provide the web server yourself. The slightly lower cost may be beneficial, but it can also be difficult in resolving an issue when the data center staff are unfamiliar with your servers hardware and if they do not have compatible parts in stock to repair the server in the event of an issue.Shared HostingShared hosting is where you pay a hosting company for an account on a web server that is already setup and likely hosting many other websites. This is why it's called shared hosting, because you are sharing the server with other websites. This type of hosting is much cheaper than dedicated hosting, at a cost of $6 - $20 per a month depending on the plan you sign up for.Unfortunately however this type of hosting has its downsides. If another website on the server receives a huge amount of traffic all at once, it could slow down or even stop the server from serving your own websites requests. At the same time a shared server may be under utilized by the other websites, so you might have a well responsive website for a fraction of the cost. It's a gamble though, and typically hosting companies like to keep shared servers well utilized to optimize profitability. Because of this shared hosting is only recommended for personal websites, or small organizations where uptime is not critical to the day-to-day operations of the business.Virtual Private ServerA virtual private server is like a mix between dedicated and shared hosting. Like dedicated hosting, you have full access to the server. 
You can choose the operating system, management system (such as cPanel), or install all the software you want/need from scratch via a Secure Shell (SSH) login to the command line of the server. Like shared hosting, your hosting environment is just one of many on the actual server hardware, however the software running on the server which provides a virtualized server environment is typically designed to prevent other VPS instances running on the same machine from hogging the resources (CPU cycles, memory) of the server. This results in a more stable hosting environment than shared hosting, but at a fraction of the cost of a dedicated server.While a Virtual Private Server may be more stable than shared hosting, it is also more limited in the resources available to it than a dedicated server, and thus is best suited for individuals or small organizations that receive light traffic, and yet still require optimal uptime for their website.A VPS is the hosting type recommended for individuals who want to host their first Ruby on Rails project. You have total control over the server, so you can install any gems needed, or configure the server however you need. I highly recommend Linode as a VPS hosting provider. Their service has the best uptime I've ever seen, the tools they provide to manage your VPS are superior, and their documentation is really great also. For instance here is a guide on setting up a VPS to host a Ruby on Rails application using the Passenger mod with the Apache web server under Ubuntu 10.10 Maverick.Cloud HostingThe 'cloud' in cloud hosting is used as a metaphor to represent the Internet, based on the cloud drawing typically used in network diagrams to represent the Internet. The term is currently a buzzword in the IT world, and the definition of what it is has become somewhat vague.Unlike the previous types of hosting outlined above, cloud hosting is not provided by a single machine, but instead by a suite of various computing resources which are virtualized and accessed as a service.For instance, the Amazon S3 (Simple Storage Service) provides simple file hosting. A special API is used to upload images to an Amazon S3 account in special containers known as 'buckets', and then these files/images may be linked to using the appropriate URL for each file. Some websites are set up to host certain assets such as images or file downloads with S3, while still hosting the web pages from a standard web server.One reason why websites choose to use Amazon S3 instead of hosting these assets from the same web server is because it allows web pages to load faster, as most web browsers will only download three files at a time from a single web server. Additionally the Amazon S3 service provides scalability, meaning that no matter how much traffic or space is needed for your files, Amazon S3 has the infrastructure to handle the hosting. This means that a website administrator won't have to worry about migrating all the files to another high capacity server at some point in the future. The service is also provided with high availability and low latency at a commodity cost.Cloud services may be used individually, or configured to work together to provide an environment much like a VPS. The upside is scalability and high availability, however the cost may exceed that of a VPS depending on your needs. 
Using cloud hosting services together can require some amount of setup and configuration, and thus companies such as RightScale or Engine Yard have stepped in to combine Amazon cloud services into a single solution. An open source platform known as Scalr is also available as a low cost solution. Cloud services are of course also provided by companies other than Amazon such as Rackspace.Cloud hosting is recommended for organizations which require optimal uptime, and likely expect to grow beyond the capabilities of even the most powerful dedicated servers in the near future. [previous][parent][next]"
} ,
{
"title" : "Web Technologies",
"category" : "",
"tags" : "",
"url" : "/resources/course/web-technologies/",
"date" : "2011-10-13 09:58:29 -0700",
"content" : "Web BrowsersThe majority of requests made to a web server are from a client computer running a web browser such as Google Chrome, Mozilla Firefox, or Microsoft Internet Explorer. Web browsers and web servers are both designed to communicate using the Hypertext Transfer Protocol (HTTP).Websites are also accessed from web servers by special software created by search engines called robots, such as Googlebot. This software is used by search engines to traverse all the pages, through the links on each page, and index all the content from the websites so that the content may be searched via the search engine.Web ServersThe most popular web server in use is Apache, which hosted approximately 54% of all domains on the Internet as of June 2010. Apache is provided by an open source software foundation and runs on Linux/Unix, Windows, and has been ported to run under Mac OS X (a Unix based operating system).The second most popular web server is Internet Information Services (IIS) provided by Microsoft, and only runs on Windows, and is used to host approximately 26% of domains on the Internet.Third is a new contender called Nginx which is known for being lightweight (using low memory of server machine) and efficient, counted as hosting 5.44% of websites in June 2010. Nginx is also provided via an open source project and runs on practically all operating systems (Linux, Unix, Windows, OS X).HTMLHypertext Markup Language (HTML) is not a programming language, but is a markup language used to describe webpages. A programming language is able to receive input, perform calculations, and output the results. A markup language is only used to describe how a document should be displayed. In the browser an HTML page looks like a document, but the file itself is only text coding which conforms to the standards of the HTML language.HTML standards evolve as time passes. Currently the most popular versions of HTML in use on the Internet are HTML 4, and XHTML 1.1.eXtensible HyperText Markup Language (XHTML) is like HTML, but conforms to the stricter standards of XML. XHTML was introduced to make HTML more extensible, and increase interoperability with other XML based data formats.HTML5 is currently being finalized as the new standard for websites.Learn HTML through W3Schools.eXtensible Markup Language (XML)XML was introduced as a format that could be used to describe not just documents, but is flexible and extensible enough to encapsulate an unlimited number of structured data formats.Many systems today use XML to request or transmit data to other systems on the internet. For instance the Google Checkout service provides an XML API for use by websites wishing to integrate with the payment processing service.Two of the most well known uses of XML are podcasts and RSS feed. Podcasts feeds are files in a type of XML format which describe information on audio files which can be downloaded and listened to. Podcast subscription software, such as iTunes, is used to track new audio files which are made available via the podcast feeds. When a new audio file becomes available, the software downloads the audio file and typically adds it to the portable audio player device (such as an iPod). Much like podcasts, RSS feeds provide a similar purpose, but instead of providing information on audio files, they instead provide data on documents, such as blog posts, which are provided by a particular website. 
People who like to keep themselves updated on various topics will subscribe to several sites using an RSS feed reader program. The reader program allows them to scan the titles and excerpts of all the pages/posts which have been made available recently, and choose which ones they wish to follow to read the page/post in its entirety. Google provides a free RSS reader available online known as Google Reader.Cascading Style Sheets (CSS)CSS is used to define how HTML elements are displayed on a website's pages. This applies not only to the standard HTML elements defined by the language, but can also be specified for elements which contain certain id fields (which represent a single specific element), or elements which are assigned to a certain class (for styles which apply to multiple elements).For instance, all the paragraphs displayed on a page will be wrapped in a paragraph tag like so:<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc et ante urna. Morbi facilisis interdum libero, id interdum ligula vestibulum eget. Fusce ipsum libero, viverra eu rutrum ut, ultricies nec libero. Mauris sit amet dolor nec lorem congue dapibus eget id felis. Morbi sit amet orci tellus. Aenean aliquam laoreet metus, ac feugiat erat eleifend at.</p>CSS coding such as this would be used to define the font type, size, and weight (normal or bold) of the text:<style type="text/css"> p { font-family:Arial,Helvetica,sans-serif; font-weight: normal; font-size: 12px; }</style>Just the same, another paragraph might require some sort of special style that is unlike all the other paragraphs throughout the website. For instance a paragraph that displays in a sidebar with a dark background.<p id="sidebar-thanks"> Thanks for visiting my website. Feel free to check out these links to my friends' websites displayed below.</p>As you can see this paragraph has an id set to 'sidebar-thanks'. The following CSS coding could be used to ensure that the text is smaller, and that it is white so that it stands out on the dark background of the sidebar it's being displayed inside of.<style type="text/css"> p#sidebar-thanks { font-size:9px; color:#FFFFFF; }</style>This paragraph will still be of normal weight, and also be Arial, Helvetica, or sans-serif, but will instead be 9 pixels in size, and white instead of the default black.Learn more about CSS at W3Schools.JavascriptJavascript is a client-side scripting language which is included with web pages and is executed only by the web browser that is displaying the page. Javascript is typically used to make modifications to elements displayed on a web page in real time, instead of requiring the user to submit data to the web server, and wait for the page to reload with a different state from the web server.A common use of Javascript includes providing a popup alert to the user when they attempt to submit an incomplete web form, thus stopping the form from being submitted to the server until all necessary information is present. A method known as Asynchronous Javascript and XML (AJAX) is a newer method of Javascript programming which allows the browser to connect to a script running on the web server (using a server side language such as PHP or Ruby), request and receive certain information, and then present the information inside of the web page without requiring the webpage to reload. 
One of the most popular demonstrations of this is the Google Maps interface which loads new map images as you click and drag the map to view a new geographic location.jQueryPerforming certain tasks using Javascript can be very tedious in that the common web browsers in use by the public, mentioned above, do not support certain Javascript commands in the same manner. This is especially the case regarding Internet Explorer in comparison to Firefox and Chrome (yes, I dislike Microsoft too). Many Javascript developers in the past have used complex sets of commands to detect the type of web browser being used, and thus execute the proper command that works for that web browser.To avoid this complication altogether there have been libraries/frameworks introduced which provide cross-browser compatible functions so you don't have to worry about which browser your coding will or will not work in. The first Javascript library introduced that gained widespread use was Prototype, which was released in February 2005 to provide the foundation for AJAX support in Ruby on Rails. A library known as Script.aculo.us was released in June of 2005, built upon Prototype to provide easy implementation of visual effects and user interface elements.More recently another library known as jQuery, released in January 2006, has taken the lead as the most popular Javascript frameworks, with jQueryUI being a library which extends jQuery to provide visual effects, animations, and user interface elements, while also supporting various themes which can be easily produced and applied to the webpage interface.RubyRuby is a programming language which was introduced in the mid-1990's by Japanese programmer Yukihiro Matsumoto, also known as 'Matz'. The Ruby language was designed for programmer productivity and fun, stressing that systems need to emphasize human rather than computer needs.Ruby is an object-oriented programming language, which simply means that the language places an emphasis on software elements which are treated as objects which have certain properties, and perform certain actions. Every variable inside of a Ruby program is an object which is of a certain class type, and has several methods which can be performed on it's value. Ruby objects are also reflective, which means that they can reveal information about themselves, such as their class.Many programming languages, such as C or Java, require a program known as a compiler which converts the source code you've written into a program file or files which can then be executed and run on the computer. Ruby is a scripting language that is run by an interpreter program in realtime, similar to other languages such as PHP or Perl, and thus it doesn't require compiling. You simply run the command for ruby to execute the script and it does what the script is programmed to do.For example, let say you install Ruby, then put the following code into a file called 'hello.rb': puts "Hello World"If you then opened the Terminal, changed to the directory that contains the file, and then ran the command 'ruby hello.rb', you would see something like this:$ ruby hello.rbHello World!$ YAMLSimilar to XML, YAML is simply just another standard for storing various types of structured data. 
YAML uses a syntax which results in a very clean and easy to read format, utilizing mostly tab characters and colons to describe the structure of the data.Ruby on Rails often uses YAML as the format for configuration files.RailsRuby on Rails is a web application framework, which means that it provides a structure which includes directories, configuration files, and many software libraries, which support the development of website applications. Obviously the name "Ruby on Rails" is meant to communicate the metaphor of using Ruby on a track, much like a train track, which gets you where you want to go smoothly and quickly.A developer which is familiar with the Ruby on Rails framework will be able to easily aquaint themselves with other Ruby on Rails applications they haven't worked on before, as all the various types of files that make up the coding for the application are stored in the same directories, and use the same programming structure. One of the primary philosophies of Ruby on Rails is "Convention over Configuration", which means that a developer only needs to specify unconventional aspects of the application...everything else which is conventionally provided by a website is provided by the Rails system.Ruby on Rails is designed to allow most coding by a web developer to be implemented using the Ruby programming language. For instance all interactions with the database server are handled by Ruby scripts which utilize a Ruby library known as ActiveRecord which converts Ruby commands into the SQL language commands which are sent to a database server such as MySQL, PostgreSQL, SQLite3, Oracle, etc. One benefit of the use of such a library is that a Ruby on Rails system can switch to a different type of database server should the need arise at some point in the future, without a lengthy migration process handled by developers and database administrator staff.ConclusionIf you're looking to dive into web development I recommend that you start with HTML and CSS first. Check out the links to the W3School tutorials provided above. If these aren't working for you, consider buying a book about HTML or CSS.You won't need Javascript, but it's recommended that you learn the basics as soon as you can so that you can use it once the need arises. Your understanding of HTML and CSS will help you become familiar with the structure of the webpage, and how to select certain HTML elements by ID or class. Javascript will help to acquaint you to what is known as the Document Object Model (DOM), which is needed to know which parts of the webpage and browser your Javascript coding can manipulate. Once you understand what options Javascript provides, you can then start to learn jQuery, which will prove to be much easier with less coding. There are many plugins available which rely on jQuery to provide very amazing solutions. For instance if you need to provide a dynamic photo gallery on a website, Gallerific is a very powerful and flexible library which can be used with jQuery to implement a very elegant image gallery solution.You won't need to understand anything more than the basics of XML or YAML to begin working with Ruby on Rails. If you want, you can also learn Ruby before you learn the Rails framework, so that you know the difference between coding which involves Rails libraries, and which are standard Ruby commands, but this isn't completely necessary.[previous][parent][next]"
} ,
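The Ruby section of the article above describes every value as a reflective object that can report its own class and methods. Here is a minimal sketch of that idea, not from the original article; the file name objects.rb is arbitrary, and it runs the same way as the hello.rb example (`ruby objects.rb`).

```ruby
# objects.rb -- every Ruby value is an object with a class and methods
puts "Hello World".class        # String
puts "Hello World".reverse      # dlroW olleH
puts 42.class                   # Integer (older interpreters print Fixnum)
puts 42.respond_to?(:even?)     # true -- objects can describe themselves (reflection)
```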
{
"title" : "Project Planning",
"category" : "",
"tags" : "",
"url" : "/resources/course/project-planning/",
"date" : "2011-10-15 22:18:34 -0700",
"content" : "Before we jump into developing an application, we want to plan out what it will do. Imagine that you're working with a client that wants you to build them a website. Exactly what they ultimately want and need might not be immediately available, but a basic idea of what they definitely need from the system now is a start. You can sit down with a pen and paper with the client and kind of diagram everything, but for the purpose of this article we're going to present our diagrams in a digital format created with Gliffy.Example: Social Networking SiteFor our example, we'll imagine that we have a client that wants to build a social networking site, similar to Myspace or Facebook, with profiles for each user which include a photo and information about the user, and the ability to these profiles to associate with each other as contacts of some sort (kind of like how Facebook has 'friends').We'll imagine that the client is wanting this application to be for some specific purpose, like a special community of people, but they haven't worked out all the details in their own mind. They just know so far that they want profiles for each user, with a photo, they want associations between users, and they also want the ability for users to send private messages to each other. The remaining details will come later.Object Oriented CodingRuby is an object oriented language, which means that the resources that you create will have both properties/attributes, and will have certain actions that they perform. For instance you can think of a 'cat' as a type of object which always has two eyes, one nose, one mouth, and one tail (unless the cat has been abused). These are constants which would be hardcoded into what is known as a 'class'. A class is used to create an instance of each individual object, like the blueprint (or DNA) for each individual object. A class might define that each object also has certain properties which change (i.e. are variable), such as the color of the cats hair, or how much it weighs. Actions it might perform include 'meow' or 'walk' or perhaps even 'poop'.This is a typical explanation of classes and objects in an object oriented language, but this example doesn't seem to suggest how object oriented coding applies to real life. Most of the time the actions which an object performs, formally known as 'methods', deal with the properties of the object itself. A real life example that applies with our example project is the 'user' class which would define that each user has a first and last name, stored separately in two properties of the object. There might be a method called 'full_name' which returns both the first and last name together, with a space between the first and last name.If you've done object oriented programming, it may seem silly to input information into the computer and have it store the information in an object temporarily, and then that information is lost as soon as the program is done executing. The point of object oriented programming doesn't make sense when the state of the objects are lost. This is where databases become important. The state of a specific object, such as a 'user', can be loaded from a database record for that user, the user can be modified using the methods of the object which are defined by the class, and then the new state of the object can be saved to the database. This is how Ruby on Rails works. 
The classes you create for the resources you're working with are called 'models', and the properties of each object are stored in the database once you're done using the methods of that model to modify the resource in some way.DiagramHere is a simple diagram for the first version of our project. It will have users which each have a single photo. Users will have many contacts, and users will also have many messages sent to contacts. [previous][next]"
} ,
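The article above describes a 'user' class with first and last name properties and a full_name method. Here is a minimal plain-Ruby sketch of that class; in the real application it would be an ActiveRecord model backed by a database table, and the names used here are only illustrative.

```ruby
# user.rb -- plain-Ruby sketch of the User model described above (illustrative only)
class User
  attr_accessor :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name  = last_name
  end

  # Returns the first and last name joined with a space, as described in the article.
  def full_name
    "#{first_name} #{last_name}"
  end
end

user = User.new("John", "Smith")
puts user.full_name    # John Smith
```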
{
"title" : "Creating Rails Project",
"category" : "",
"tags" : "",
"url" : "/resources/course/creating-rails-project/",
"date" : "2011-10-15 23:27:34 -0700",
"content" : "The first step in creating your Rails project is to open the command line.The 'rails' command is used to generate a new Ruby on Rails application. It's also used for many other functions when you are inside of the root directory of the application that has been generated.If you run the command 'rails --help', you will be shown the options which you can use with the 'rails' command.$ rails --helpUsage: rails new APP_PATH [options]Options: [--old-style-hash] # Force using old style hash (:foo => 'bar') on Ruby >= 1.9 [--skip-gemfile] # Don't create a Gemfile -d, [--database=DATABASE] # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db/sqlserver/jdbcmysql/jdbcsqlite3/jdbcpostgresql/jdbc) # Default: sqlite3 -O, [--skip-active-record] # Skip Active Record files [--skip-bundle] # Don't run bundle install -T, [--skip-test-unit] # Skip Test::Unit files -r, [--ruby=PATH] # Path to the Ruby binary of your choice # Default: /opt/local/bin/ruby -S, [--skip-sprockets] # Skip Sprockets files [--dev] # Setup the application with Gemfile pointing to your Rails checkout -j, [--javascript=JAVASCRIPT] # Preconfigure for selected JavaScript library # Default: jquery -J, [--skip-javascript] # Skip JavaScript files -m, [--template=TEMPLATE] # Path to an application template (can be a filesystem path or URL) [--edge] # Setup the application with Gemfile pointing to Rails repository -G, [--skip-git] # Skip Git ignores and keeps -b, [--builder=BUILDER] # Path to a application builder (can be a filesystem path or URL)Runtime options: -q, [--quiet] # Supress status output -s, [--skip] # Skip files that already exist -f, [--force] # Overwrite files that already exist -p, [--pretend] # Run but do not make any changesRails options: -v, [--version] # Show Rails version number and quit -h, [--help] # Show this help message and quitDescription: The 'rails new' command creates a new Rails application with a default directory structure and configuration at the path you specify.Example: rails new ~/Code/Ruby/weblog This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going.You'll see that there is a section which outlines the database options you can specify. -d, [--database=DATABASE] # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db/sqlserver/jdbcmysql/jdbcsqlite3/jdbcpostgresql/jdbc)When creating a new Rails application you can create it so that it's ready to be configured with various database servers. For the purpose of our tutorial, we will be using the MySQL server. We will create our application with the name 'snetwork', configured for a MySQL database, using the following command:rails new snetwork -d mysql[previous][next]"
} ,
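For reference, here is a rough sketch of the Gemfile that `rails new snetwork -d mysql` produces with Rails 3.1-era defaults. This is an approximation rather than the command's literal output, and exact gem versions will differ.

```ruby
# snetwork/Gemfile (excerpt) -- approximate result of `rails new snetwork -d mysql`
source 'http://rubygems.org'

gem 'rails', '3.1.1'

# The -d mysql switch swaps the default sqlite3 gem for the MySQL adapter;
# config/database.yml is likewise generated with `adapter: mysql2` entries.
gem 'mysql2'
```

After generating the project, run `bundle install` inside the snetwork directory to install these gems.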
{
"title" : "HTTP Port and Request Methods",
"category" : "",
"tags" : "",
"url" : "/resources/course/http-port-request-methods/",
"date" : "2011-10-13 10:20:16 -0700",
"content" : "HTTP PortsA web browser by default will make a request to an HTTP server on port 80, the standard HTTP server port, however it's possible for web servers to use a different port in special circumstances. To specify an alternative port in the address for the request, simply add a colon and the number after the domain. For instance many hosting companies use a system known as cPanel to manage their hosting accounts on a web server. Since the web server runs on port 80, the cPanel administrative interface runs on port 2082 like so: http://www.redconfetti.com:2082/Standard GET and POST RequestsThe most common HTTP method used to make a request of a web server is the 'GET' method, which simply tells the server to provide the contents of a specific resource based on the URL. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK" (indicating a successful request), along with the body of the response which is either the HTML webpage being requested, or an error message.For example, here is the GET request sent to a server after the browser has been told to go to http://www.example.com/index.htmlGET /index.html HTTP/1.1Host: www.example.comThe server might then respond with these headers:HTTP/1.1 200 OKDate: Mon, 23 May 2005 22:38:34 GMTServer: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)Last-Modified: Wed, 08 Jan 2003 23:11:55 GMTEtag: "3f80f-1b6-3e1cb03b"Accept-Ranges: bytesContent-Length: 438Connection: closeContent-Type: text/html; charset=UTF-8...and thereafter include the 438 bytes of HTML coding for the index web page in the body of the response.The contents of an HTTP request are not seen in the web browser when requesting a web page, and just the same the 'header' information of the HTTP response provided by the server are not seen in the web browser either. It is only the body of the response which is either rendered in the browser window (such as an HTML web page), or displayed directly (such as images).Sometimes a GET request includes additional information used by the resource. For instance a webpage address such as http://www.redconfetti.com/index.php?page=about&style=black would result in a request such as:GET /index.php?page=about&style=black HTTP/1.1Host: www.example.comThis type of request allows you to see the information being provided to the server in the URL for the page/resource. This method is recommended for webpages which provide different information based on the additional parameters included in the request, but not for scripts/pages which create, update, or delete resources on the server as the URL can be bookmarked and re-used, leading to accidental and unwanted modifications.The second most common HTTP method used with requests to web servers is the 'POST' method. A POST request is the type typically used when a web form is submitted to the server. A web form can either submit information via GET (like shown above) or via a POST request. 
A POST request includes information in a certain format in addition to the path of the resource.This type of request cannot be accidentally re-submitted to the server, and in fact your browser will specifically ask you if you wish to re-submit the request when you choose to 'Refresh' a page that was loaded via a POST request.A POST request for an email form on a website may look like this:POST /contact/send-email.php HTTP/1.1Host: www.example.comContent-Type: application/x-www-form-urlencodedContent-Length: 46name=John%20Smith&email=john.smith@mailinator.com&message=Hello,How%20are%20you!Other HTTP MethodsHTTP defines nine methods/request types indicating the desired action to be performed on the resource (webpage, image, or other file) being requested. These methods are: HEAD, GET, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, and PATCH.Most people think of HTTP as a simple read-only medium where a browser makes a request and receives information, and the only method available to publish or remove content from a website is via FTP. Tim Berners-Lee, the inventor of the World Wide Web, originally designed the first browser called "WorldWideWeb" as a read/write browser; meaning you could not only browse and read content, but create and edit content too. The majority of websites today only operate through GET requests for the majority of pages, and sometimes use POST for web form submissions, but do not utilize the other HTTP requests types. These other request methods were intended to allow web servers to act like full services which not only provide options to 'GET' resources, but also to create new resources using the 'PUT' request type, or delete resources using the 'DELETE' request type.The PUT method is similar to POST, as it provides data with the request separate of the path of the resource being requested. The DELETE method is much like GET, but is intended for requests to delete certain resources.More detail on the HTTP methods is available W3.org Definitions page.[previous][next]"
} ,
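The GET and POST exchanges described above can be reproduced from Ruby's standard library. A small sketch using net/http against the article's example host follows; it is illustrative and not part of the original article.

```ruby
# http_requests.rb -- issuing the GET and POST requests described above with net/http
require 'net/http'
require 'uri'

# GET http://www.example.com/index.html
uri = URI.parse('http://www.example.com/index.html')
response = Net::HTTP.get_response(uri)
puts response.code              # e.g. "200"
puts response['Content-Type']   # individual response headers are available by name
puts response.body              # the HTML body a browser would render

# POST /contact/send-email.php with urlencoded form data, like the email form example
post_uri = URI.parse('http://www.example.com/contact/send-email.php')
response = Net::HTTP.post_form(post_uri,
                               'name'    => 'John Smith',
                               'email'   => 'john.smith@mailinator.com',
                               'message' => 'Hello, how are you!')
puts response.code
```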
{
"title" : "Tracking Changes using Git",
"category" : "",
"tags" : "",
"url" : "/resources/course/tracking-changes-using-git/",
"date" : "2011-10-29 21:11:53 -0700",
"content" : "Git is a free distributed revision control system. A revision control system allows developers to submit additions or modifications they make to a software project to a central database known as a repository. Each change made to the source code for the project is known as a commit. With each commit a developer will include a comment which describes what the change is.Unlike other popular revision control systems, such as Subversion, Git allows the developer to make multiple commits to a local repository hosted from their local computer. Once they are ready to share their changes with other developers, they will push the one or many commits they've made locally to the remotely hosted repository. When a developer begins a session of modifying a project, they'll usually run a command to 'pull' all of the latest changes made by other developers to their own local repository. This ensures that changes made by other developers do not conflict with the changes they are about to make.Setting up an SSH Key on your Local MachineChecking for Existing KeysFirst check to see if you have SSH keys setup on your machine.$ cd ~/.ssh-bash: cd: /Users/johnsmith/.ssh: No such file or directoryIf the directory does exist, run 'ls' to see if there is an id_rsa (or id_dsa) and id_rsa.pub (or id_dsa.pub) file present. If so, you already have an SSH key pair setup to with the Git repository.If it is indicated that the directory doesn't exist, you do not have SSH keys setup for your user account on your computer yet. Proceed with these instructions to generate SSH keys below.Generating SSH KeysRun the following command to generate the SSH keys.ssh-keygen -t rsa -C "your_email@youremail.com"When asked for a passphrase, you can provide one, but it's not required. Using a passphrase with your SSH key only makes authentication more secure, and makes it so you have to provide the passphrase each time you connect to a remote server using the key. Other options include the use of a mechanism which provides the password for you when using the SSH keys, such as the keychain provided by Mac OS X.Setting up a Remote RepositoryFor the purposes of this article I will assume that you are using a server running the Ubuntu operating system, a popular Linux distribution used for both desktop and server machines. I highly recommend using a Linode VPS with Ubuntu installed.We'll be using a program known as Gitosis to setup the Git repository that you'll be hosting from the server to keep track of changes made to your application. The first step in accomplishing this is to install Git on your Ubuntu server. This is done using the apt-get package manager which is available from the Ubuntu command line.sudo apt-get install git-coreNext we'll need to download Gitosis, the program which is going to aid in the setup and management of the repositories hosted from the server.cd ~git clone git://eagain.net/gitosis.gitThis will place a folder named 'gitosis' inside of your home directory. 
Now move this directory to the /usr/local directory where software not provided by apt-get is typically installed.sudo mv gitosis/ /usr/localNext go into the /usr/local/gitosis folder, install Python setuptools using apt-get, and then run the setup.py script as a super user (using sudo) inside of the folder to complete the installation.cd /usr/local/gitosissudo apt-get install python-setuptoolssudo python setup.py installNow that Gitosis is installed, the next step is to create a user account on the Ubuntu server which will host the repository for the project, as well as other repositories you choose to add in the future. Run the following command to add the 'git' user, which will exist on the server without a password (so no one can log in as the 'git' user), with its home directory specified as /var/git (a proper place for the repositories to be hosted from the system).sudo adduser --system --shell /bin/sh --gecos 'git version control' --group --disabled-password --home /var/git git"
} ,
{
"title" : "Web Services",
"category" : "",
"tags" : "",
"url" : "/resources/course/web-services/",
"date" : "2011-10-12 22:32:43 -0700",
"content" : "Web ServicesWeb services are software systems designed to allow for communication between two separate systems on a network, or more specifically in this case - the Internet. Often a website will refer to it's web services as an Application Programming Interface (API), as the web service isn't strictly intended to be used by other websites, but also with desktop applications. For instance Twitter's developers website refers to it's programmer integration options as an API.The dominant protocol adopted earlier for web services was SOAP, however most web applications developed with "Web 2.0" standards in mind are adopting RESTful interfaces which correlate with the HTTP methods discussed above.RESTful Web ServicesThe World Wide Web, and thus HTTP, was originally designed to provide a full suite of methods for the creation, reading, updating, and destroying (also known as CRUD) of resources on the web. As such newer website applications are adopting the use of web services/APIs which use HTTP and principles of a software architecture style known as Representational state transfer (REST), known as a RESTful Web API.A RESTful Web API simply provides an interface for managing resources provided by a web application using the GET, POST, PUT, and DELETE methods, while supporting various data types for communication such as XML, JavaScript Object Notation (JSON), or YAML.When you are creating a new set of resource management options in a Ruby on Rails application, it's possible to define the requests which are possible with a certain resource as a RESTful resource that can be managed via a typical web browser interface, or as a RESTful Web API, thus killing two birds with one stone. More on this will be covered in detail in the article on Rails Routing.[previous][next]"
} ,
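As a sketch of the Rails point above (one resource definition serving both a browser interface and a RESTful API), here is roughly how a Rails 3 application might declare and serve such a resource. The Snetwork application name and the messages resource are assumptions carried over from the earlier project-planning example, not code from the article.

```ruby
# config/routes.rb -- declaring a RESTful resource (Rails 3 style)
Snetwork::Application.routes.draw do
  # One line maps GET/POST/PUT/DELETE requests onto the standard CRUD actions.
  resources :messages
end

# app/controllers/messages_controller.rb -- the same action can answer a browser
# (HTML) or an API client (JSON/XML), which is the "two birds with one stone" above.
class MessagesController < ApplicationController
  def index
    @messages = Message.all
    respond_to do |format|
      format.html                              # renders app/views/messages/index.html.erb
      format.json { render :json => @messages }
      format.xml  { render :xml  => @messages }
    end
  end
end
```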
{
"title" : "Search",
"category" : "",
"tags" : "",
"url" : "/search/",
"date" : "",
"content" : " "
} ,
{
"title" : "Miscellaneous",
"category" : "",
"tags" : "",
"url" : "/resources/notes/misc/",
"date" : "",
"content" : " Regular ExpressionsAPIs GraphQL Apollo client Video Streaming Services AddPipeJavascript Frameworks VueJS ComparisonBoilerplate StarHackIt"
} ,
{
"title" : "Blender Notes",
"category" : "",
"tags" : "",
"url" : "/resources/notes/blender/",
"date" : "",
"content" : "Back to NotesThis are notes for Blender, the free and open source 3D creation suite.These notes are based on the Blender 2.8 Fundamentals course, createdusing Blender 2.93 for Mac OS. Viewport Navigation Interface Overview Cheat SheetsAs an alternative to my incomplete notes above, I recommend that you browse oneof the following versions of the latest official Blender manuals. Online Manual HTML Manual (download) Epub Manual (download)"
} ,
{
"title" : "VueJS",
"category" : "",
"tags" : "",
"url" : "/resources/notes/vue/",
"date" : "",
"content" : "Back to Cheat Sheets Intro to Vue.js"
} ,
{
"title" : "Node.js",
"category" : "",
"tags" : "",
"url" : "/resources/notes/node/",
"date" : "",
"content" : "IntroAllows you to build scalable network applications using JavaScript on theserver-side. Uses V8 JavaScript Runtime, which powers Chrome / Chromium. NodeJS isa wrapper for this engine.What can you build with NodeJS Websocket Server (e.g. Chat Room) Fast File Upload client Ad Server Any Real-Time Data AppsNodeJS is not a framework, it is very low level, and it is not multi-threaded. Itis a single threaded server.Blocking vs Non-Blocking CodeWith blocking code the previous command must be completed before the next commandcan run. Non-blocking code is structured so that it uses callbacks to determine thenext step after something is completed. This allows the code to continue running ifone item is not yet complete.NodeJS is single threaded, however it’s able to run JavaScript Asynchronously. Itis build upon libuv, a cross-platform library that abstracts apis/syscalls forasynchronous (non-blocking) input/output provided by the supported OSes (Unix, OS Xand Windows).In this model, open/read/write operation on devices and resources (sockets, filesystem, etc) managed by the file-system do not block the calling thread like theydo with C programs. They mark the process to be notified when new data or eventsare available.Node uses an event loop to allow this, which invokes the next callback/functionthat was scheduled for execution. “Everything runs in parallel except your code”,meaning that node allows your code to handle requests from hundreds and thousandsof open sockets with a single thread concurrently by multiplexing and sequencingall your js logic in a single stream of execution.var http = require("http")http .createServer(function(request, response) { response.writeHead(200) // status code in header response.write("Hello, this is dog.") // response body response.end() // close the connection }) .listen(8080) // listen for connections on this portThis code uses an event loop that checks for an HTTP request until one isencountered.Why JavaScript “Javascript has certain characteristics that make it very different than otherdynamic languages, namely that it has no concept of threads. Its model ofconcurrency is completely based around events.” - Ryan DahlKnown Events: request connection closeWhen these events are encountered, the associated callback functions are called. You could consider these callback functions our Event Queue.var http = require("http")http .createServer(function(request, response) { response.writeHead(200) // status code in header response.write("Hello, this is dog.") // response body setTimeout(function() { // represents long running process response.write("Dog is done.") response.end() }, 5000) // 5000ms = 5 seconds }) .listen(8080) // listen for connections on this portBlocking Call to the File Systemvar fs = require("fs")var contents = fs.readFileSync("index.html")console.log(contents)Non-Blocking Call to the File Systemvar fs = require("fs")fs.readFile("index.html", function(error, contents) { console.log(contents)})Read File and Serve as HTMLvar http = require("http")var fs = require("fs")http .createServer(function(request, response) { response.writeHead(200, { "Content-Type": "text/html" }) fs.readFile("index.html", function(error, contents) { response.write(contents) response.end() }) }) .listen(8080)EventsThe DOM triggers events such as click, hover, or submit. You can register callbacksvia jQuery when these events occur.Many objects in NodeJS emit events. The net.Server object inherits from theEventEmitter class, and emits the ‘request’ event. 
fs.readStream also inherits fromEventEmitter, and emits the ‘data’ event as data is read from the file.Custom Event EmittersWe can register our own emitter to do something like log errors, warnings, or infoevents.var EventEmitter = require("events").EventEmittervar logger = new EventEmitter()logger.on("error", function(message) { console.log("ERR: " + message)})logger.emit("error", "Spilled Milk")logger.emit("error", "Eggs Cracked")// Chat Emittervar events = require("events")var EventEmitter = events.EventEmittervar chat = new EventEmitter()chat.on("message", function(message) { console.log(message)})// emit call to 'message' callbackchat.emit("message", "hello, how are you doing")Multiple EventsIt is possible to register multiple callbacks for a single request emitter. Whenthe request is received, the response will be sent to the requestor, and theconsole log will occur.var http = require("http")var server = http.createServer()server.on("request", function(request, response) { response.writeHead(200) response.write("Hello, this is dog") response.end()})server.on("request", function(request, response) { console.log("New request coming in...")})server.listen(8080)# terminal 1$ node server.jsNew request coming in...# terminal 2$ curl http://localhost:8080/Hello, this is dogHTTP Echo ServerIn the last lesson, we registered an event for http server to respond to requests,using the request event callback.http.createServer returns a new web server object. It allows you to pass arequestListener function, which it uses to respond to requests.http.createServer(function(request, response) { ... });The ‘request’ event makes a call to this function, passing the two parameters intothe function.Alternative SyntaxThis alternative syntax is typically how you add event listeners in Node.// create server with no parametersvar server = http.createServer();// register request event callbackserver.on('request', function(request, response) { … });// register event callback when server is closedserver.on('close', function() { … });StreamsFor efficiency, we need to be able to access data chunk-by-chunk, piece-by-piece,and sending the data as it receives each chunk. By processing and sending eachchunk, less memory is used.Streams can be readable, writeable, or both. The API described here is for streamsin Node version 0.10.x (the streams2 API).With our server example, the request is a readable stream, and the response is awritable stream.http .createServer(function(request, response) { response.writeHead(200) response.write("<p>Dog is running.</p>") setTimeout(function() { response.write("<p>Dog is done.</p>") response.end() }, 5000) }) .listen(8080)The browser immediately sends the response with the first write to the responsestream. 5 seconds later we write the “Dog is done” string to the response, thenclose the response object, ending the stream.How might we read from the request? The request object is a readable stream, whichalso inherits from EventEmitter. 
It can communicate with other objects throughevents, such as ‘readable’ which is fired when the object is ready to consume data,and the ‘end’ event which is fired when it is done.http .createServer(function(request, response) { response.writeHead(200) request.on("readable", function() { var chunk = null while (null !== (chunk = request.read())) { console.log(chunk.toString()) } }) request.on("end", function() { response.end() }) }) .listen(8080)We have to use toString() to convert the chunk, because itprovides a buffer where binary data might be present.In this case we’re logging the data we receive from the client tothe console. Instead of doing this we can provide the same databack to the client that we’ve received.http .createServer(function(request, response) { response.writeHead(200) request.on("readable", function() { var chunk = null while (null !== (chunk = request.read())) { response.write(chunk) } }) request.on("end", function() { response.end() }) }) .listen(8080)In this scenario, response.write converts the chunk to a string for us. When we want to redirect the request stream back to the response stream we can instead use request.pipe(response). This same code above can be refactored with the following:http .createServer(function(request, response) { response.writeHead(200) request.pipe(response) }) .listen(8080)$ curl -d 'hello' http://localhost:8080HelloThis is similar to the pipe operator used on the Bash command line, used to streamthe output from one operation into the next one.When you can, use pipe instead of listening to the readable event and manuallyreading the chunks. This helps protect your application from future breakingchanges to the streams API provided by Node, which may change in the future giventhat NodeJS is still very young (v0.10.0). In the NodeJS documentation, it isreported how stable the API is, meaning how likely it is to change and thus breakyour existing functionality that depends on the API.For instance, the File System has a Stability rating of 3 - Stable, with the StreamAPI rated as 2 - Unstable.Reading and Writing a Filevar fs = require("fs")var file = fs.createReadStream("readme.md")var newFile = fs.createWriteStream("readme_copy.md")file.pipe(newFile)Streaming is so powerful, so simple to use with the pipe function, that there are3rd party libraries that depend on it. For instance the Gulp.js build system, whichexposes the pipe function as its public API so you can do many sorts ofmanipulations on its assets with very few lines of code.var fs = require("fs")var http = require("http")http .createServer(function(request, response) { var newFile = fs.createWriteStream("readme_copy.md") request.pipe(newFile) request.on("end", function() { response.end("uploaded!") }) }) .listen(8080)curl --upload-file readme.md http://localhost:8080We can pipe any read stream into any write stream. In this example we can read froma request instead of from a file. We listen to the ‘end’ event for the request sothat we can close the response stream once completed.We are streaming pieces of the file from the client to the server, then the serveris streaming those pieces to the disk as they are being read from the request.Because Node is non-blocking, if we try to upload two files to the same server,Node can handle them simultaneously.Ryan Dahl created NodeJS was to deal with the issue of file uploads. 
Many webapplications try to receive the entire file into memory before writing it to thedisk, which can cause issues on the server side, which can also block other usersof the same web application.It’s also not possible to provide file upload progress to the user as it’s beinguploaded.File Uploading Progress$ curl --upload-file file.jpg http://localhost:8080progress: 3%progress: 6%...progress: 99%progress: 100%var fs = require("fs")var http = require("http")http .createServer(function(request, response) { var newFile = fs.createWriteStream("readme_copy.md") var fileBytes = request.headers["content-length"] var uploadedBytes = 0 request.on("readable", function() { var chunk = null while (null !== (chunk = request.read())) { uploadedBytes += chunk.length var progress = (uploadedBytes / fileBytes) * 100 response.write("progress: " + parseInt(progress, 10) + "%\n") } }) request.pipe(newFile) request.on("end", function() { response.end("uploaded!") }) }) .listen(8080)Because the stream of request contents to the newFile object (writeable file stream)is being established immediately after the request object has registered the‘readable’ function callback, the pipe is set up immediately to feed into the file.As soon as the request object is ready to read chunks of data from, it starts toprocess each chunk and provide the progress back to the client making the request.Output to Standard Outputvar fs = require("fs")var file = fs.createReadStream("fruits.txt")file.pipe(process.stdout)ModulesWe’ve loaded modules using ‘require’ in past lessons.var http = require("http")var fs = require("fs")How does ‘require’ return the libraries?How does it find these files?// custom_hello.jsvar hello = function() { console.log("hello!")}module.exports = hello// custom_goodbye.jsexports.goodbye = function() { console.log("bye!")}// app.jsvar hello = require('./custom_hello');Var gb = require('./custom_goodbye');hello();gb.goodbye();With our hello module, we’re only making a single method public by assigning it tomodule.exports. 
With the goodbye module we could assign more than just this singlefunction to the module.We can optionally require the module and then call the method directly.require("./custom_goodbye").goodbye()Export Multiple Functions// my_module.jsvar foo = function() { … }var bar = function() { … }var baz = function() { … }exports.foo = fooexports.bar = bar// app.jsvar myMod = require("./my_module")myMod.foo()myMod.bar()Because we did not export the ‘baz’ function, it is private, and only accessiblefrom within the module.Making HTTP Requests// app.jsvar http = require("http")var message = "Here's looking at you, kid."var options = { host: "localhost", port: 8080, path: "/", method: "POST"}var request = http.request(options, function(response) { response.on("data", function(data) { console.log(data) // logs response body })})request.write(message) // begins requestrequest.end() // finishes requestEncapsulating the FunctionWe can make this simpler by wrapping it in a function call.// make_request.jsvar http = require("http")var makeRequest = function(message) { var options = { host: "localhost", port: 8080, path: "/", method: "POST" } var request = http.request(options, function(response) { response.on("data", function(data) { console.log(data) // logs response body }) }) request.write(message) // begins request request.end() // finishes request}module.exports = makeRequest// app.jsvar makeRequest = require("./make_request")makeRequest("Here's looking at you, kid")makeRequest("Hello, this is dog")Node Package Manager (NPM)Where does require look for modules?var make_request = require("./make_request") // looks in same directoryvar make_request = require("../make_request") // looks in parent directory// looks at absolute pathvar make_request = require("/Users/eric/nodes/make_request")// looks for it inside 'node_modules' directoryvar make_request = require("make_request")Looks for node_modules directory: In the current directory In the parent directory In the parent’s parent directory Etc.Each directory inside of ‘node_modules’ is a package that represents a module.Packages come from NPM (Node Package Manager).NPM comes with node, there is a module repository, and it handles dependenciesautomatically, and makes it easy to public modules.Node Package ManagerInstalling a NPM ModuleInside of /home/my_app:npm install requestThis will install the ‘request’ package inside of /home/my_app/node_modules/request// - /home/my_app/app.js// loads from local node_modules directoryvar request = require("request")Local vs GlobalSometimes you may want to install packages globally, instead of only within aspecific application.npm install coffee-script -gThis package comes with an executable that we can use from the command line:coffee app.coffeeA globally installed NPM module cannot be required.// will not workvar coffee = require("coffee-script")We still have to install the coffee-script module locally for the application torequire it into our program.Finding ModulesYou can find libraries that are useful in the NPM registry website, in Github, or you can search from the command line:npm search requestDefining Your Dependencies// my_app/package.json{ "name": "My App", "version": "1", "dependencies": { "connect": "1.8.7" }}npm installWhen you get a node project, the node_modules folder won’t be present. You’ll haveto run ‘npm install’ to install the needed packages.Inside of my_app/node_modules/connect, you’ll notice that each package has it’s ownset of dependencies as well. 
NPM installs those dependencies as well.Semantic Versioning"connect": "1.8.7"Major version is 1, Minor is 8, Patch is 7.A major version change may completely change the API. A minor version is less likely, and a patch shouldn’t. “connect”: “~1” - Will fetch any version greater to or equal to 1.0.0, yet less than 2.0.0 “connect”: “~1.8” - Will fetch versions greater than 1.8.0, less than 1.9.0 “connect”: “~1.8.7” - Will fetch versions greater than 1.8.7, less than 1.9.0See SemVer.org for more informationExpressNode is very low level. You’ll want to build a web framework if you’re working on a very large web application. Or you can use an existing framework such as Express.“Sinatra inspired web development framework for Node.js – insanely fast, flexible, and simple” Easy route URLs to callbacks Middleware (from Connect) Environment based configuration Redirection helpers File Uploads# install module and add to package.json$ npm install --save expressvar express = require("express")var app = express()// configure root routeapp.get("/", function(request, response) { // serve file from current directory response.sendFile(__dirname + "/index.html")})app.listen(8080)Express RoutesWe want to create an endpoint that receives a certain twitter users name, obtains that users tweets from Twitter, and returns them.// app.jsvar request = require("request")var url = require("url")app.get("/tweets/:username", function(req, response) { var username = req.params.username options = { protocol: "http:", host: "api.twitter.com", pathname: "/1/statuses/user_timeline.json", query: { screen_name: username, count: 10 } } var twitterUrl = url.format(options) request(twitterUrl).pipe(response)})The Twitter API requires users to authenticate, so there will be more code requiredto do this.curl -s http://localhost:8080/tweets/eallam/npm install prettyjson -gcurl -s http://localhost:8080/tweets/eallam/ | prettyjsonExpress Templatesnpm install --save ejsmy_app/package.json"dependencies": { "express": "4.9.6", "ejs": "1.0.0"}my_app/app.jsapp.get('/tweets/:username', function(req, response) { … request(url, function(err, res, body) { var tweets = JSON.parse(body); response.locals = {tweets: tweets, name: username}; response.render('tweets.ejs'); }}my_app/views/tweets.ejs<h1>Tweets for @<%= name %></h1><ul> <% tweets.forEach(function(tweet) { %> <li><%= tweet.text %></li> <% }); %></ul>Socket IONode works very well for providing real time communication, which is perfect forrunning a chat server.Typically the HTTP request/response cycle involves a request, the browser waits fora response, and then the server responds. That is the end of the connection.With Websockets, we can transmit information back and forth at the same time. Thisis known as a full duplex connection. We cannot rely on every web browser tosupport WebSockets, so we have to use a library as a fallback for when the browserdoes not support socket connections.npm install --save socket.io// app.jsvar express = require("express")var app = express()var server = require("http").createServer(app)var io = require("socket.io")(server)io.on("connection", function(client) { console.log("Client connected…")})app.get("/", function(req, res) { res.sendFile(__dirname + "/index.html")})server.listen(8080)Here we are passing the server object to the socket.io library so that it can usethe server to listen for requests. 
We are registering a connection event withlogger, and then also configuring a path for the request to be received by.Socket.io for Websockets<script src="/socket.io/socket.io.js"></script><script> var socket = io.connect("http://localhost:8080")</script>Sending Message to Client// app.jsio.on("connection", function(client) { console.log("Client connected…") // emit the message event on the client client.emit("messages", { hello: "world" })})<!-- index.html --><script src="/socket.io/socket.io.js"></script><script> var socket = io.connect("http://localhost:8080") socket.on("messages", function(data) { alert(data.hello) })</script>$ node app.js Info - socket.io startedhttp://localhost:8080/Alert pops up with the hello in the browser, and the console logs that the clientconnected.Sending Messages to Server// app.jsio.on("connection", function(client) { client.on("messages", function(data) { console.log(data) })})<!-- index.html --><script> var socket = io.connect("http://localhost:8080") $("#chat_form").submit(function(e) { var message = $("#chat_input").val() socket.emit("messages", message) })</script>$ node app.js Info - socket.io startedBroadcasting MessagesWe’re trying to create a chat room, not simply send and receive messages. Luckily there is a broadcast method supported by Socket.iosocket.broadcast.emit("message", "Hello")This will send the message to all the other connected sockets (chat room clients).// app.jsio.on("connection", function(client) { client.on("messages", function(data) { client.broadcast.emit("messages", data) })})<!-- index.html --><script> var socket = io.connect("http://localhost:8080") socket.on("messages", function(data) { insertMessage(data) })</script>Saving Data On The SocketWe don’t know who is who on this server, so we need to make some possible way ofregistering usernames.io.on("connection", function(client) { client.on("join", function(name) { client.nickname = name // set the nickname associated with this client })})We’ll prompt the user for their nickname when they connect, and then we’ll emitthat to the server via the ‘join’ event.<script> var server = io.connect("http://localhost:8080") server.on("connect", function(data) { $("#status").html("Connected to chattr") nickname = prompt("What is your nickname?") server.emit("join", nickname) })</script>This ensures that the name is available both to the server, and to the client. Nowwe need to make the messages listener so that before we broadcast the message weget the nickname of the client and use that when emitting the message.// app.jsio.on("connection", function(client) { client.on("join", function(name) { client.nickname = name }) client.on("messages", function(data) { var nickname = client.nickname client.broadcast.emit("message", nickname + ": " + message) // broadcasts the name and message client.emit("messages", nickname + ": " + message) // sends the same message back to our own client })})"
} ,
{
"title" : "Notes",
"category" : "",
"tags" : "",
"url" : "/resources/notes/",
"date" : "",
"content" : "Misc BlenderJavaScript Javascript Reference ECMAScript 2015 NodeJs Javascript Fundamental for ES6React Powering Up with React Build a YouTube Clone Application Using React React Hooks - Most Used FeaturesRedux How Redux WorksVue Intro to Vue.js"
} ,
{
"title" : "Cheat Sheets",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/",
"date" : "",
"content" : "Back to ResourcesHere are my own personal cheat sheets. I’ve organized some into their own pages,or simply added a few commands below.Development C Programming Docker Git GNU/Linux MySQL Node Package Manager (NPM) PGP Encryption / Decryption PostgreSQL Rails Rails Tests RubyGems RVM RBenv Vagrant VimOperating Systems MacOSX Raspberry Pi OS"
} ,
{
"title" : "Setting up a Rails Development Environment for Mac Users",
"category" : "",
"tags" : "",
"url" : "/resources/course/setting-up-rails-development-environment-for-mac-users/",
"date" : "2011-10-13 18:28:05 -0700",
"content" : "This article provides instructions for setting up a Ruby on Rails developmentenvironment using Mac OS X. Please see the next article for instructions onsetup for Windows or Linux.Mac OS X because it’s the most popular operating system within the Ruby on Railscommunity, with many tutorials, screencasts, and other resources devoted towardthose developing for Ruby on Rails using Mac OS X.Text EditorWhen you’re working on a Ruby on Rails application it’s best if you have a texteditor which provides Ruby code syntax highlighting / coloring. This makes iteasy to detect when you’re making a mistake, and makes for a pleasant codingexperience (don’t you like colors?!). Also important is a project folder viewerwhich displays the various files in each folder of the project.I highly recommend TextMate. At a price of $55, it’s well worth it as it makesyour code look beautiful and easy to edit. There are alternative programs suchas jEdit, or an entire development environment known as Aptana Radrails,however these are complicated and I don’t recommend them to someone who is justgetting into Ruby on Rails web development.If you’re not wanting to pay for Textmate, then I can recommend RedCar, whichruns on jRuby/Java and can be kind of slow, but closely resembles Textmate.If you’re working on an open source project, or willing to pay $15 a month, thenyou can use a web-based IDE tailored for Rails projects calledCloud9.Package ManagementTo work on a Ruby on Rails project, you’ll need to use the command line to runcertain commands. I also recommend that you setup your computer so that you caninstallation and use any Linux-based software or libraries which you may findyou need to have available at some point in the future. You may be aware thatWindows software is made to be installed and run for any version of Windows (XP,Vista, 7), and just the same Mac software will install and run on Mac computersrunning a recent Mac OS X version (such as Leopard or Lion).Software made forLinux is not so easy to install on any computer running a Linux operatingsystem. This is because Linux software is originally developed and provided assource code, and requires a very technical understanding of how to compile andinstall the software on the system. At the same time, there are not justversions of Linux (older and newer), but many various distributions that arebeing maintained and updated. Each distribution, which is put together anddistributed by an organization or sometimes by an individual, may storeprograms, libraries, and configuration files in entirely different directoriesinside the file system.A Linux distribution may also come with certain systems or utilities which aredesigned to work with that distributions configuration of programs, libraries,and configuration files. Because of this, there is not a one-size fits allinstaller available for Linux programs that will work with any distribution.This is why the most popular Linux distributions come with a program known as aPackage Manager, which downloads and installs software as packages which areintended for that specific distribution. For example there is a very popularLinux distribution known as Ubuntu, which comes witha package manager called Synaptic. When an Ubuntu user chooses to install theApache2 web server via Synaptic, a package is downloaded and installed which isspecifically designed to work with Ubuntu. 
For the purpose of this website, theinstructions below will help you to create a link to the command line/terminalfor your system, install a package manager, and then install the software neededvia that package manager. This way your version of Ruby, Apache web server, andother libraries, will all be associated with each other, and will make for asmoother experience without as many complications. ## Command Prompt for Mac OSX - Terminal The command line on a Mac is known as ‘Terminal’ and is availableunder Applications > Utilities, in the Finder window. Drag this program to yourDock on the bottom of the screen so you can open it quickly and easily in thefuture as needed. Even though Mac OS X comes packaged with Ruby, we recommendthat you install a package manager known asMacPorts and then establish all thesoftware installed by MacPorts as the ones used in your command lineenvironment.Installing RubyThe first step in getting started with Ruby on Rails is to install the Rubyinterpreter itself. Many programming languages, such as C or Java, require aprogram known as a compiler which converts the text coding of the language intoan executable program for use on the computer. Ruby is a scripting language thatis run by an interpreter program in realtime, like PHP or Perl, and thus itdoesn’t require compiling. You simply run the command for ruby to execute thescript and it does what the script is programmed to do.If you’re running Mac OS X, you don’t need to install Ruby because you alreadyhave it. It is packaged with your operating system.Installing Ruby GemsMost programming languages provide standard features such as variables,operators, conditional statements, loops, etc. In addition to these standardfeatures a core set of libraries are typically provided which provide functionsfor working with the file system, networking, different types of variables, etc.All these features and libraries are the building blocks of any computerprogram, however there are many common actions which you may want to performwith your program that you would need to program from scratch. Luckily there areprogramming libraries which are made for free to the public which providemethods for performing certain actions via your own programs. Typically yousimply add the library to a program folder in your program, and then use somesort of command to include that library either globally (available to the wholeprogram), or inside of a specific script where you intend to use the functionsof that library.Many programming languages have a package manager program available which aidsin the retrieval and installation of these libraries. For the Perl programminglanguage there is a package manager and repository of libraries made availablecalled Comprehensive Perl Archive Network (CPAN).For the PHP programming language there is a package manager and repository oflibraries made available through a system known asPHP Extension and Application Repository (PEAR).RubyGems is a package manager for theRuby programming language which provides program or libraries in packagescalled ‘Gems’. Ruby on Rails itself is a Gem, as are the many libraries whichthe Rails framework relies on (ActiveModel, ActiveResource, ActiveSupport, etc).After installing a gem, the Ruby interpreter has access to the gem by simplyincluding it. 
For instance a Ruby script that needs to use the functionscontained in the FasterCSV gem simply includes “require ‘faster_csv’” at thetop of the script.If you’re using Mac OS X Leopard or above, you already have Ruby and Ruby Gemsrunning on your computer.Installing RailsAll you need to do is open up the Terminal program from your Applications folderunder ‘Utilities’. We recommend that you drag the Terminal to your toolbar asyou’ll be using it often enough as you work on your Ruby on Rails applicationto make it easily accessible. From the command line of the Terminalprogram run the command ‘sudo gem update rails’. The lines displayed after thiscommand, after you’ve typed your password and pressed enter, should look similarto this:$ sudo gem update railsPassword:Updating installed gemsUpdating jquery-railsFetching: jquery-rails-1.0.16.gem (100%)Fetching: activesupport-3.1.1.gem (100%)Fetching: activemodel-3.1.1.gem (100%)Fetching: rack-cache-1.1.gem (100%)Fetching: actionpack-3.1.1.gem (100%)Successfully installed jquery-rails-1.0.16Successfully installed activesupport-3.1.1Successfully installed activemodel-3.1.1Successfully installed rack-cache-1.1Successfully installed actionpack-3.1.1Updating railsFetching: activerecord-3.1.1.gem (100%)Fetching: activeresource-3.1.1.gem (100%)Fetching: actionmailer-3.1.1.gem (100%)Fetching: railties-3.1.1.gem (100%)Fetching: rails-3.1.1.gem (100%)Successfully installed activerecord-3.1.1Successfully installed activeresource-3.1.1Successfully installed actionmailer-3.1.1Successfully installed railties-3.1.1Successfully installed rails-3.1.1Updating rails-footnotesFetching: rails-footnotes-3.7.5.gem (100%)Successfully installed rails-footnotes-3.7.5Gems updated: jquery-rails, activesupport, activemodel, rack-cache, actionpack, activerecord, activeresource, actionmailer, railties, rails, rails-footnotesInstalling ri documentation for jquery-rails-1.0.16...Installing ri documentation for activesupport-3.1.1...Installing ri documentation for activemodel-3.1.1...Installing ri documentation for rack-cache-1.1...Installing ri documentation for actionpack-3.1.1...Installing ri documentation for activerecord-3.1.1...Installing ri documentation for activeresource-3.1.1...Installing ri documentation for actionmailer-3.1.1...Installing ri documentation for railties-3.1.1...Installing ri documentation for rails-3.1.1...file 'lib' not foundInstalling ri documentation for rails-footnotes-3.7.5...Installing RDoc documentation for jquery-rails-1.0.16...Installing RDoc documentation for activesupport-3.1.1...Installing RDoc documentation for activemodel-3.1.1...Installing RDoc documentation for rack-cache-1.1...Installing RDoc documentation for actionpack-3.1.1...Installing RDoc documentation for activerecord-3.1.1...Installing RDoc documentation for activeresource-3.1.1...Installing RDoc documentation for actionmailer-3.1.1...Installing RDoc documentation for railties-3.1.1...Installing RDoc documentation for rails-3.1.1...file 'lib' not foundInstalling RDoc documentation for rails-footnotes-3.7.5...Note: You will not see stars or other characters as you enter your password.Simply type it and press the ‘Enter’ key. If you mistyped the password, tryagain until it works."
} ,
{
"title" : "Course Outline",
"category" : "",
"tags" : "",
"url" : "/resources/course/",
"date" : "2011-10-12 17:05:21 -0700",
"content" : "The following is a series of articles I worked on in mid 2011, meant to providea guide for someone new to web development that wished to start with Ruby onRails. As a whole the course is incomplete, but some of the articles still couldprovide value to those who stumble upon them. If you wish to use the articles tohelp yourself get started, feel free to use my contact page to let me know. I’mwilling to clean these up a bit, and finish the course with your help, if I knowthat you wish to use them.The course was not intended to provide instruction onHTML,CSS, orJavascript. I expect that you canpurchase books on these subjects, or even refer to W3Schools links I’ve justprovided to learn more about those languages. HTML and CSS are definitely neededas prerequisites to working with Ruby on Rails. HTML is easy enough to learn.CSS just takes a little bit of experience and time to pick up and work withfluently. I’m still learning Javascript, even though I have a working knowledgeof it, so it’s not a total requirement to jump into the world of webdevelopment.It might be a bit simpler to start people out with a server side language suchas PHP, but I’m interested in helping people dive right into a websitedevelopment framework (Rails) that is based on an even more flexible andbeautiful language known as Ruby.The course provided by this site provides an explanation of the concepts whichexperienced web developers are already aware of, and thus provide theprerequisite material needed for a person to jump into a book on Ruby on Railswithout being completely lost. In the case of this site, the articles areintended to serve as the prerequisite for theRails 3 In Action book by Ryan Bigg and YehudaKatz, which are available in print or ebook versions. Introduction Web Technologies Static and Dynamic Resources, and Rewrite Engines Setting up a Rails Rails Development Environment for Mac Users Setting Up a Rails Development Environment for PC Users Project Planning Creating Rails Project Web Hosting Capistrano Deployment for Apache2 with Passenger HTTP Port and Request Methods Web Services"
} ,
{
"title" : "Resources",
"category" : "",
"tags" : "",
"url" : "/resources/",
"date" : "",
"content" : " Cheat Sheets Links Notes"
} ,
{
"title" : "About",
"category" : "",
"tags" : "",
"url" : "/about/",
"date" : "",
"content" : "I am Jason Miller, a Ruby on Rails developer in the East Tennessee.Programming is a learning experience. You try to do something, hit a wall, trialand error, dig deep, triumph (usually) and learn new things. As I do this,I post about it here."
} ,
{
"title" : "Blender Viewport Navigation",
"category" : "",
"tags" : "",
"url" : "/resources/notes/blender/interface-overview/",
"date" : "",
"content" : "Back to Blender Notes IndexThese notes are based on the Viewport Navigation video tutorial.Status BarAt the very bottom of the window there is a Status Bar that indicates theactions that certain mouse clicks will perform, in the context of which hot keysyou are pressing, and which objects your mouse cursor is pointing at in thewindow.You can toggle the status bar from the ‘Window’ menu using the ‘Show Status Bar’option.Default WorkspacesBlender provides default Workspaces for common workflows. These are: Layout - general workspace to preview your scene and objects Modeling - For modification of geometry by modeling tools Sculpting - For modification of meshes by sculpting tools UV Editing - Mapping of image texture coordinates to 3D surfaces Texture Paint - Tools for coloring image textures in the 3D Viewport Shading - Tools for specifying material properties for rendering Animation - Tools for making properties of objects dependent on time Rendering - For viewing and analyzing rendering results Compositing - Combining and post-processing of images and renderinginformation Geometry Nodes - For procedural modeling using Geometry Nodes Scripting - Programming workspace for writing scriptsAfter you’ve made modifications to the panels in a workspace, thosemodifications remain even if you switch between workspaces.You can use CTRL + Page Up or CTRL + Page Down to switch between workspaces.Resizing PanelsYou can place your mouse cursor on the boundary of a panel, and you will seethat the cursor icon turns into an adjustment icon. Click and drag to adjust thesize of the panel.Toggle MaximizedYou can maximize certain panels by placing your mouse cursor over them, andthen clicking on CTRL + Spacebar to toggle between a maximized and defaultsized panel.Change Editor TypeYou can click on the ‘Editor Type’ icon in the upper-left corner of a panelto change the type of editor that is showing in the panel.There are many types of editors you can choose from.Splitting AreasIf you right-click on the boundary of a panel, you can choose ‘Swap Areas’ toswap the two panels that share the same boundary.You can also choose to split a panel into two panels horizontally or vertically.The split doesn’t have to be related to the boundary of the panels you wereright-clicking on, you can move your mouse cursor and choose to splitany of the panels.For instance, if you choose ‘Vertical Split’, then a verticalline will display over any of the panels your mouse cursor points over. As soonas you click inside of the panel, the split will take place upon the line youplaced.Removing PanelsIf you wish to remove a panel, right-click on the boundary of two panels andselect ‘Join Areas’. 
You’ll then be given the option to point at one of thepanels that you want to remove, then left-click to remove it, thus leaving theother panel present.3D ViewportToggle ToolbarYou can use the ‘t’ key to toggle the display of the Toolbar.You can also use SHIFT + Spacebar to view the Popup Toolbar.For more information, see Tool System.Toggle SidebarThere is a 3D Viewport Sidebar that can be toggled using the ‘n’ key.When it’s hidden you can also display it by clicking on the little arrowshown on the top right corner of the 3D Viewport panel.Sidebar Item TransformThe ‘Item’ tab provides a Transform area that displays various properties of thobject that is currently selected, and allows you to click and drag on thevalues for those settings to adjust them.Sidebar ToolThe ‘tool’ tab allows you to modify various settings related to the currentlyselected tool in the Toolbar.ViewThe ‘view’ tab provides you with other 3D Viewport options, such as the settingsfor the 3D Cursor.The 3D Cursor is used to determine where new 3D objects should be placed(spawned), and also acts as a reference for pivoting (used in animations).You can use SHIFT + Right-click to place the 3D Cursor inside of the 3D viewportarea. You can also press SHIFT + S to view a menu of options that lets youplace the 3D Cursor in commonly desired locations, such as the ‘World Origin’which is the center of the entire scene.Pie MenusThere are various 3D Viewport Pie Menus that display around your mouse cursorwhen certain Hotkey combinations are pressed. The previous SHIFT + S examplefor placing the 3D cursor is an example of this. They call it a “Pie” menubecause the options display around a circle, with each option being like aslice of the pie.A pie menu will display after certain hot-keys are pressed, and will remainon the screen until you move your mouse and left-click to select the highlightedoption. If you change your mind, you can right-click to cancel.Optionally you can press and hold the hotkeys, move your mouse, and then releasethe hotkeys with the option you want selected to be executed.TimelineBelow the 3D Viewport there is a Timeline panel. You may need to drag theboundary of this to make it larger, as it is minimized by default.The Timeline is used to control playback of animations. You can use your mousescroll wheel to “zoom in” and “zoom out” of the timeline. You can alsodrag the timeline with your middle mouse button.PropertiesThe Properties panel contains several tabs shown vertically with differenticons, and is used to display the properties of your scene. Tool Properties Active Tool and Workspace Settings - Options for your currently selectedtool in the Toolbar, similar to the ‘Tool’ tab in the Sidebar. Scene Properties Render Properties Output Properties View Layer Properties Scene Properties World Properties Object Properties Physics Properties Object Constraint Properties Object Data Properties Texture Properties ScenesYou can create more than one scene in a single Blender file."
} ,
{
"title" : "Intro to Vue.js",
"category" : "",
"tags" : "",
"url" : "/resources/notes/vue/intro/",
"date" : "",
"content" : "The Vue Dev Tools for Chrome extension is recommended.The Vue InstanceYou can insert a Vue component inside of an existing front-end, using a CDN toinclude the Vue library using a <script> tag.index.html<!DOCTYPE html><html><head> <meta name="viewport" content="width=device-width, initial-scale=" /> <title>Product App</title></head><body> <div id="app"> <h1>{{ product }}</h1> </div> <script src="https://cdn.jsdelivr.net/npm/vue"></script> <script src="main.js"></script></body></html>main.jsvar app = new Vue({ el: '#app', data: { product: 'Socks' }})In the template, anything between {{ and }} can be an expression to outputsomething into the page. It can call a function, use logic, or just referencevariables.In the console we can manipulate the value of the data like so:app.product = 'Coat'Doing this will update the product name displayed by the Vue component.Attribute BindingYou can bind the value in your data object with attributes in the DOM.<img v-bind:src="image" />var app = new Vue({ el: '#app', data: { product: 'Socks', image: './assets/duck.jpg' }})In this case the image URL is bound to the ‘src’ attribute of the ‘img’ element.You can do the same with the alt attribute.<img v-bind:src="image" v-bind:alt="altText" />var app = new Vue({ el: '#app', data: { product: 'Socks', image: './assets/duck.jpg', altText: 'It\'s a duck' }})There is also a short-hand method for binding where you just use the colon (:)like so:<img :src="image" :alt="altText" />Conditional RenderingVue also supports directives for conditional rendering.<p v-if="inStock">In Stock</p><p v-else>Out of Stock</p>data: { product: 'Socks', inStock: true}You can also use else-if conditionals.<p v-if="inventory > 10">In Stock</p><p v-else-if="inventory <= 10 && inventory > 0">Almost sold out!</p><p v-else>Out of Stock</p>data: { product: 'Socks', inventory: 4}Instead of using the if-else directives, you can simply use v-show.<p v-show="inStock">In Stock</p>List RenderingIf you want to render an array of data elements, you can use v-for.data: { product: 'Socks', details: ["80% cotton", "20% polyester", "100% masculine"]}<ul> <li v-for="detail in details">{{ detail }}</li></ul>Iterating over ObjectsYou may need to iterate over JSON objects, instead of primitive values such asintegers or strings.data: { product: 'Socks', variants: [ { id: 2234, color: "green" }, { id: 2235, color: "blue" }, ]}<div v-for="variant in variants"> <p>{{ variant.color }}</p></div>Vue needs a unique key for each object to keep track of them, so you’ll want toadd a key attribute to each element.<div v-for="variant in variants" :key="variant.id"> <p>{{ variant.color }}</p></div>If needed, you can also ensure that the index for each item is present with eachiteration.<div class="color-box" v-for="(variant, index) in variants" :key="variant.id" @mouseover="updateProduct(index)"></div>Event HandlingYou can bind function callbacks to events using the v-on directive.<button v-on:click="cart + 1">Add to Cart</button>The above example placed the expression in the template, but you can also pointto a method.<button v-on:click="addToCart">Add to Cart</button>var app = new Vue({ el: '#app', data: { product: 'Socks', }, methods: { addToCart() { this.cart += 1; } }})Shorthand for v-on is available via the @ character. 
v-on:mouseover is thesame as @mouseover.<p @mouseover="updateProduct(variant.image)">Class & Style BindingYou can apply styles to classes<div class="color-box" v-for="variant in variants" :key="variant.id" :style="{ backgroundColor: variant.color }"></div>You can also disable a button based on a value<button v-on:click="addToCart" :disabled="!inStock" :class="{ disabledButton: !inStock }" >Add to Cart</button>Computed PropertiesComputed Properties calculate a value rather than store a value.var app = new Vue({ el: '#app', data: { product: 'Socks', brand: 'Vue Mastery' }, computed: { title() { return this.brand + ' ' + this.product } }})You may wonder what the difference is between using a computed property vs amethod, as both can provide the same results. The difference is thatcomputed properties are cached based on their reactive dependencies. Acomputed property will only re-evaluate if a dependency has changed…where-as a method will execute every time.ComponentsYou don’t want everything living within a root instance of your Vue app. You’llinstead want to put all your logic for each element in your front-end in aself-contained instance called a Component.Vue.component('product', { template: ` <div class="product"> <!-- ... Our Product HTML all goes in here ... --> </div> `, data() { return { // data goes here } }, methods: { // methods go here }, computed: { // computed properties go here }})There are many ways to create a template in Vue, but in this example above we’reusing backticks to define a template literal.You’ll also notice that our data is now a function. This is to ensure that eachcomponent is not sharing the same ‘data’ element, as the function will return aunique data object.Our main app can look like this now:var app = new Vue({ el: '#app'})And our main HTML can be presented like so:<div id="app"> <product></product><div>PropsTo pass data to components, you define ‘props’ that the component is expectingto receive.Vue.component('product', { props: { premium: { type: Boolean, required: true } }})We can display this ‘premium’ value in our template like so:<p>User is premium: {{ premium }}</p>We can pass the value to our component from the Root instance like so:<div id="app"> <product :premium="premium"></product></div>If we needed to let this prop affect the outcome of a computed value, we canjust refer to it via this.shipping() { if (this.premium) { return "Free" } else { return 2.99 }}Note: Don’t modify the value of props you pass into a component, as propsare overwritten when re-rendering the component. It’s best to use a computedvalue to serve a mutated form of a prop if needed.Communicating EventsSometimes we need to communicate events from one component to our root app. Forexample, if a shopping cart item is added with a button click in one component,and should update a value that is rendered elsewhere in the page. 
In such cases you need a custom event. addToCart() { this.$emit('add-to-cart')} For this to be accessible outside of the component, we have to apply a binding to the component from the root app. <product :premium="premium" @add-to-cart="updateCart"></product><!-- equivalent --><product :premium="premium" v-on:add-to-cart="updateCart"></product> In our root app we can have this method defined: var app = new Vue({ el: '#app', methods: { updateCart() { this.cart += 1; } }}) Forms Vue.component('product-review', { template: ` <form class="review-form" @submit.prevent="onSubmit"> <p> <label for="name">Name:</label> <input id="name" v-model="name" placeholder="name"> </p> <p> <label for="review">Review:</label> <textarea id="review" v-model="review"></textarea> </p> <p> <label for="rating">Rating:</label> <select id="rating" v-model.number="rating"> <option>5</option> <option>4</option> <option>3</option> <option>2</option> <option>1</option> </select> </p> <p> <input type="submit" value="Submit"> </p> </form> `, data() { return { name: null, review: null, rating: null } }}); We’re using the v-model directive to associate form elements with values in our data. You can see we used the .number modifier on the rating to ensure that the value is interpreted as an integer. You can also see that on the form itself we declare a @submit event handler that uses the .prevent event modifier to prevent the page from reloading (the default browser behavior for submit buttons) when the button is clicked. Next we’ll want to add our onSubmit() method. methods: { onSubmit() { let productReview = { name: this.name, review: this.review, rating: this.rating } this.$emit('review-submitted', productReview) this.name = null this.review = null this.rating = null }} Our method is packing up the current values into the productReview object, emitting the event to any subscribers outside of the component, then resetting the form to default values. We can subscribe to the emitted event on our component like so: <product-review @review-submitted="addReview"></product-review> We could register this in the parent component like so: data() { return { reviews: [] }},methods: { addReview(productReview) { this.reviews.push(productReview) }} Now directly above our product-review component we can display our reviews: <div> <h2>Reviews</h2> <p v-if="!reviews.length">There are no reviews yet.</p> <ul> <li v-for="review in reviews"> <p>{{ review.name }}</p> <p>Rating: {{ review.rating }}</p> <p>{{ review.review }}</p> </li> </ul></div> Global Event Bus: A common solution for communicating between components (especially when they are very far apart, like a grandparent-grandchild component relationship) is to use a global event bus. var eventBus = new Vue() In our component, instead of registering an emitter using this.$emit, we’ll instead use eventBus like so: eventBus.$emit('review-submitted', productReview) Now we’ll set up a listener for this event in the mounted() function of the product component. mounted() { eventBus.$on('review-submitted', productReview => { this.reviews.push(productReview) })} mounted() is a lifecycle hook that is called once the component has mounted to the DOM. Using a global event bus can be a simple solution, but you’re probably better off using Vuex."
} ,
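The note above closes by recommending Vuex over a global event bus but does not show what that looks like. As a rough, hedged sketch only (not part of the original notes; the store shape and mutation name are made up for illustration), the same review hand-off could be modeled with a minimal Vuex 3 store:

```js
// Minimal sketch assuming Vue 2 + Vuex 3 loaded globally (e.g. via CDN script tags).
// The state/mutation names here are illustrative, not from the notes above.
const store = new Vuex.Store({
  state: {
    reviews: []
  },
  mutations: {
    // commit('addReview', productReview) takes the place of eventBus.$emit(...)
    addReview(state, productReview) {
      state.reviews.push(productReview)
    }
  }
})

var app = new Vue({
  el: '#app',
  store // makes this.$store available in every child component
})

// Inside the product-review component's onSubmit():
//   this.$store.commit('addReview', productReview)
// and any component can read this.$store.state.reviews without a shared bus.
```

The advantage over the event bus is that the review list lives in one predictable place instead of being pushed between components through ad-hoc events.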
{
"title" : "JavaScript Fundamentals",
"category" : "",
"tags" : "",
"url" : "/resources/notes/javascript/javascript-fundamentals/",
"date" : "",
"content" : "JavaScriptBasic OperatorsJavaScript is designed for interactive webpages, however this tutorial will beprimarily focused on the fundamental building blocks of the JavaScript languageitself. This will better serve as a prerequisite to working with frameworkssuch as jQuery.Common Operators used in JavaScript syntaxAddition>> 6 + 4-> 10Subtraction>> 9 - 5-> 4Multiplication>> 3 * 4-> 12Division>> 12 / 4-> 3Modulus - Remainder after division>> 43 % 10-> 3Order of Operations: PEMDASJavaScript recognizes the standard order of operations in mathematics. The acronym PEMDAS. Parenthesis Exponents Multiplication Division Addition Subtraction// The parenthesis will be evaluated first>> (5 + 7) * 3// Then multiplied>> 12 * 3-> 36>> (3 * 4) + 3 - 12 / 2// Parenthesis first>> 12 + 3 - 12 / 2// Next Division>> 12 + 3 - 6// Next Addition>> 15 - 6// Finally Subtraction-> 9Modulus is contained within the Multiplication (M of PEMDAS) hierarchical level>> 4 + (8 % (3 + 1))Inner parenthesis first>> 4 + (8 % 4)>> 4 + (0)-> 4ComparatorsComparators allow us to compare values, returning a boolean value (true or false)# Greater Than>> 6 > 4-> true# Less Than>> 9 < 5-> falseTo compare equality in JavaScript, it requires a special syntax. We use a double equal sign to receive a boolean value.// Is equal>> 3 == 4-> false// Not equal>> 12 != 4-> true// Greater or equal>> 8 >= -2-> true// Less or Equal>> 10 <= 10-> trueStringsJavaScripts way of handling, storing, and processing flat text.>> "Raindrops On Roses"-> "Raindrops On Roses">> "Whiskers on Kittens"-> "Whiskers on Kittens"Concatenation>> "Raindrops On Roses" + "And" + "Whiskers on Kittens"-> "Raindrops On Roses And Whiskers On Kittens"Concatenation works with numbers and expressions too:>> "The meaning of life is" + 42-> "The meaning of life is 42">> "Platform " + 9 + " and " + 3/4-> "Platform 9 and 0.75"Special CharactersSome characters need backslash notation to “escape” the characters.>> "Flight #:\t921\t\tSeat:\t21c"-> "Flight #: 921 Seat: 21cUse backslash to escape double quotes>> "Login Password:\t\t\"C3P0R2D2\""-> "Login Password: "C3P0R2D2"Double backslash for the backslash itself>> "Origin\\Destination:\tOrlando (MCO) \\ London(LHR)"-> "Origin\Destinaton: Orlando (MCO) \ London(LHR)"Newline character - \nString ComparisonsCheck for matching strings and alphabetical ordering.>> "The Wright Brothers" == "The Wright Brothers"-> true>> "The Wright Brothers" == "Super Mario Brothers"-> false# Javascript is case sensitive>> "Jason" != "jason"-> trueString Length>> "antidisestablishmentarianism".length-> 28# Spaces are contained also>> "Hello Govna!".length-> 12VariablesJavaScript uses variables to store and manage data.>> var trainWhistles = 3>> trainWhistles-> 3Naming VariablesRules and Regulations No spaces in variable names - var my name = 'jason'; No digits prefixing the names - var 3blindmice = []; Underscores are okay - var scored_is_fine = ` ‘3’``; ` Dollar signs are okay - var get$ = ` ‘123’``; ` (why would you though?)CamelCaseBegins with lowercase, words capitalizedvar goodNameBro = 'my name';var mortalKombat2 = []; // uses number at endChanging Variable ContentsThe var keyword isn’t needed after the variable has already been declared in memory>> var trainWhistles = 3>> trainWhistles = 9>> trainWhistles = trainWhistles + 3>> trainWhistles-> 12# similar operation>> trainWhistles += 3-> 15>> trainWhistles = trainWhistles * 2-> 30# again, similar yet shorter operation>> trainWhistles *= 2-> 60Using Variables>> trainWhistles = 3>> "All of 
our trains have " + trainWhistles + " whistles!"-> "All of our trains have 3 whistles!">> "But the Pollack 9000 has " + (trainWhistles * 3) + "!"-> "But the Pollack 9000 has 9!"Incrementing and Decrementing>> trainWhistles = 3>> trainWhistles++>> trainWhistles-> 4>> trainWhistles = 3>> trainWhistles-->> trainWhistles-> 2Variable ExplorationJavaScript can store anything in variables like strings>> var welcome = "Welcome to the JavaScript Express Line">> var safetyTip = "Look both ways before crossing the tracks">> welcome + "\n" + safetyTip-> "Welcome to the JavaScript Express LineLook both ways before crossing the tracks"Using Variable Names with Strings>> var longString = "I wouldn't want to retype this String every time.">> longString.length-> 49Compare using the length operatorlongWordOne.length > longWordTwo.lengthFinding Specific Characters Within Strings>> var sentence = "Antidisestablishmentarianism">> sentence.length-> 43>> sentence.charAt(11)-> "b"Variables Help Organize Data>> var trainsOperational = 8>> var totalTrains = 12>> var operatingStatus = " trains are operational today.">> trainsOperational + " of " + totalTrains + operationalStatus-> "8 of 12 trains are operational today."FilesScript TagsTo run JavaScript within an HTML file<!-- index.html -><html><head> <script src="trains.js"></script></head><body> <h1>JavaScript Express!</h1></body></html>// trains.js"Train #" + 1 + " is running.""Train #" 2 1 + " is running.""Train #" 3 1 + " is running."This will result in an error. The compiler doesn’t understand what we’ve placed in our file the same way that the console does.We need to use semi-colons at the end of each statement.var totalTrains = 12;var trainsOperational = 8;console.log("There are " + trainOperational + " running trains.";>> "There are 8 running trains."LoopsWhile LoopRuns the code as long as the boolean expression evaluates to true.while (1 == 1) { // do code}var number = 1;while (number <= 5) { console.log(number); number++;}var trainNumber = 1;while (trainNumber <= 8) { console.log("Train #" + trainNumber + " is running"); trainNumber++;}For Loopfor (initialize; condition; do after each loop) { // code}for (var trainNumber = 1; trainNumber <= trainsOperational; trainNumber++) { console.log("Train #" + trainNumber + "is running");}var totalTrains = 12;var trainsOperational = 8;var trainNumber = 1;while (trainNumber <= trainsOperational) { // ...}for (var stoppedTrain = trainsOperational + 1; stoppedTrain <= totalTrains; stoppedTrains++) { // ...}Conditional StatementsIf-Elsevar totalTrains = 12;var trainsOperational = 8;for (var trainNumber = 1; trainNumber <= totalTrains; trainNumber++) { if (trainNumber <= trainsOperational) { console.log("Train #" + trainNumber + "is running"); } else { console.log("Train #" + trainNumber + "is not operational"); }}Else-Ifif (*some condition is true) { // do this code} else if (*some other condition is true*) { // do something else} else { // in all other cases, do this instead}Checking Multiple Conditionsif (trainNumber <= operationalTrains) { console.log("Train #" + trainNumber + "is running.");} else if (trainNumber == 10) { console.log("Train 10 begins running at noon.");} else { console.log("Train #" + trainNumber + "is not operational.");}Nested Conditionalsif (*is a square*) { if (*it's big*) { // make it red } else { // make it blue, because it must be a small square }} else { // since it's not a square, it must be a circle // make it purpose}Complex Conditionals&& - Binary AND - Returns true if both values are 
true|| - Binary OR - Returns true if either value is true# Both are true>> (11 >= 11) && (-7 < 6)-> true# 9 is not less than 4>> (2 >= 0) && (9 < 4)-> falseReferences CanIUse.com - JavaScript browser support"
} ,
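One nit in the “Files” portion of the notes above: the console.log snippet meant to show the corrected, semicolon-terminated version still has a variable-name mismatch (trainOperational vs. trainsOperational) and a missing closing parenthesis. A corrected sketch of that same snippet:

```js
// trains.js - corrected version of the snippet above: matching variable name,
// closing parenthesis added, statements terminated with semicolons.
var totalTrains = 12;
var trainsOperational = 8;
console.log("There are " + trainsOperational + " running trains.");
// -> "There are 8 running trains."
```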
{
"title" : "Links",
"category" : "",
"tags" : "",
"url" : "/resources/links/",
"date" : "",
"content" : "Back to ResourcesCode Courses Pluralsight - Purchased the course site formerly known as CodeSchool.Provides courses for Software Development (HTMl/CSS, Javascript, Ruby, Python,iOS, Git, and Databases), IT Ops, Data Professional, and Information and CyberSecurity Egghead ThinksterSandboxes JsFiddle - Free solution with collaboration features JsBin - Free solution, with upgraded features availablefor $18.90 monthly / $145.50 yearly Plunker - Provides options for chat and collaborationwith other remote developers CodePen Repl.it - Create and share code examples, withexecution and output, for many languages (Javascript, Ruby, Python, PHP, Go,etc.)JavaScript Libraries PrismJS - lightweight, extensible syntax highlighter Shower - HTML presentation engine. See GithubOrganizational Tools Pinboard - Social bookmarking for $11 yearly, alternative to Delicious.$25 yearly and they will store copies of the pages you bookmark (just in casethey might go away) Dragdis - Look a bookmarking platform, but visually oriented. Appearsdesigned for visual inspiration, like Pinterest for the web. Phabricator - Coding development platform similar to Github, created andused by FacebookCollaboration Tools Codeassium - Collaborative code editor, video conferencing, code execution Cloud9 - Cloud / browser based IDE. Includes built in terminal, imageeditor, and language tools. Nitrous - A development platform with a collaborative web IDE. Koding - A social development platform with real-time collaborativefeatures. Requires AWS EC2 instance for back-end. CodeAnywhere - Cross platform Web IDE Kobra - Realtime collaborative coding, with built invideo & voice chat. Not meant to be an IDE.Transpilers EMScripten - Compiles C and C++ into highly-optimizable JavaScript in asm.js format. This lets you run C and C++ on the web at near-native speed,without plugins.Javascript React JS Brief history of JavaScript ModulesContainer Technologies Docker Open Container Initiative CRI-O - container run-time Podman - used to interact with pods and containers Buildah - container build tool OpenShift - Red Hat containerization software "
} ,
{
"title" : "Mac OS X Keyboard Shortcuts",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/macosx/",
"date" : "",
"content" : "Back to Cheat SheetsSee all at Mac keyboard shortcuts COMMAND + X - Cut COMMAND + C - Copy COMMAND + V - Paste COMMAND + Z - Undo previous command COMMAND + A - Select All Items COMMAND + F - Open a Find window in Finder, or find items in a program COMMAND + M - Minimize current window COMMAND + H - Hide current window COMMAND + OPTION + H - Hide other windows except current COMMAND + TAB - Switch to another running program COMMAND + SHIFT + 3 - Take fullscreen screenshot COMMAND + SHIFT + 4 - Take screenshot of selected area COMMAND + COMMA (,) - Open preferences for current program"
} ,
{
"title" : "React Hooks - Most Used Features",
"category" : "",
"tags" : "",
"url" : "/resources/notes/react/most-used-features/",
"date" : "",
"content" : "Notes from React Hooks - Most Used Features Github repositorygit clone git@github.com:redconfetti/react-youtube-clone.gitgit checkout v1.0Current State of ReactPossibilities that current React components offer us. Class based components Smart Manage state Have access to lifecycle methods Function based components Quite Stupid Do nothing but return some JSX Often you’ll start with a function based component, and once you realize youneed to manage the state, or have access to lifecycle methods, you have torefactor your component to be a Class based component.What are HooksHooks are functions that let you “hook into” React state and lifecycle featuresfrom function components.React provides a few built-in Hooks like useState or useEffect. You can alsocreate your own Hooks to reuse stateful behavior between different components.Hooks are just an addition. You can use them in a few components withoutrewriting existing code. Class based components are here to stay.Introduction of Hooks allows re-using of stateful logic between components. Youcan write your own hooks that can be re-used between components.Rules of HooksHooks are JavaScript functions, but they impose two additional rules: Only call Hooks at the top level. Don’t call Hooks inside loops, conditions,or nested functions. Hooks need to be called in the same order each time acomponent renders. Only call Hooks from React function components. Don’t call Hooks fromregular JavaScript functions.FAQWhat is a Hook?A Hook is a special function that lets you “hook into” React features. Forexample, useState is a Hook that lets you add React state to functioncomponents. We’ll learn other React Hooks later.When would I use a Hook?If you write a function component and realize you need to add some state to it,previously you had to convert it to a class. Now you can use a Hook inside theexisting function component.Demo ApplicationWe’re going to use dummy APIs hosted by JSON Placeholder Posts TodosIn our tutorial app, we have our primary App component.// src/components/App.jsimport React from "react"import ResourceList from "./ResourceList"class App extends React.Component { state = { resourceName: "posts" } render() { return ( <React.Fragment> <button onClick={() => this.setState({ resourceName: "posts" })}> Posts </button> <button onClick={() => this.setState({ resourceName: "todos" })}> Todos </button> <ResourceList resourceName={this.state.resourceName} /> </React.Fragment> ) }}export default AppThis has a single state property called resourceName that gets changed whenone of the two buttons are clicked (‘Posts’ or ‘Todos’).We’re passing this resourceName to the ResourceList component.We’re using componentDidMount(), which is a lifecycle method supported byReact. 
Within this function we’re making the call to the JSON PlaceholderAPI that corresponds to the resource (‘posts’ or ‘todos’), and setting theresponse as resources within the state of our component.componentDidMount() is called the first time our component renders.// src/components/ResourceList.jsimport React from "react"import axios from "axios"class ResourceList extends React.Component { state = { resources: [] } async componentDidMount() { const response = await axios.get( `https://jsonplaceholder.typicode.com/${this.props.resourceName}` ) this.setState({ resources: response.data }) } async componentDidUpdate(prevProps) { if (prevProps.resourceName !== this.props.resourceName) { const response = await axios.get( `https://jsonplaceholder.typicode.com/${this.props.resourceName}` ) this.setState({ resources: response.data }) } } render() { return ( <ul> {this.state.resources.map(resource => ( <li key={resource.id}>{resource.title}</li> ))} </ul> ) }}export default ResourceListWe’re also defining that the same thing occur via componentDidUpdate(), whichruns when our component is updated. It only runs the call to the remote APIif the resourceName in the props changed.Recreate App as a Functional ComponentLet’s recreate our App component as a functional component. We’ve also addeduseState to our import statement from React.// src/components/App.jsimport React, { useState } from "react"import ResourceList from "./ResourceList"const App = () => { const [resourceName, setResourceName] = useState("posts") return ( <React.Fragment> <button onClick={() => this.setState({ resourceName: "posts" })}> Posts </button> <button onClick={() => this.setState({ resourceName: "todos" })}> Todos </button> <ResourceList resourceName={this.state.resourceName} /> </React.Fragment> )}export default AppAs you can see we’ve added a new statement that assigns two values returned fromthe call to useState("posts") to resourceName and setResourceName.resourceName is a variable that represents the current state of resourceName,just like it did before within our state object.The second parameter in the array is a function that can change the state ofresourceName.We’re using Array destructuring in JavaScript to assign the results of the arrayto the two variables locally.// non-destructuring exampleconst arr = [1, 2]const first = arr[0]const second = arr[1]// destructuring exampleconst arr = [1, 2]const [first, second] = arr// < first// > 1// < second// > 2Next we need to update our JSX so that it uses the new variable for our state,as well as use the setResourceName function to update the state when thebuttons are clicked.// src/components/App.jsimport React, { useState } from "react"import ResourceList from "./ResourceList"const App = () => { const [resourceName, setResourceName] = useState("posts") return ( <React.Fragment> <button onClick={() => setResourceName("posts")}>Posts</button> <button onClick={() => setResourceName("todos")}>Todos</button> <ResourceList resourceName={resourceName} /> </React.Fragment> )}export default AppUsing Hooks in ResourceListNow let’s update our ResourceList component. We’re importing useState anduseEffect. As you can see we’ve moved our API call code into a single functionthat is using async and await. The async keyword tells JavaScript thata call inside the function will be asynchronous, and the await keywordlets it know that the axios.get call will be done asynchronously. 
It runsthe rest of the function after the response is received, thus using oursetResources function to change the state of resources.// src/components/ResourceList.jsimport React, { useState, useEffect } from "react"import axios from "axios"const ResourceList = ({ resourceName }) => { const [resources, setResources] = useState([]) const fetchResources = async resourceName => { const response = await axios.get( `https://jsonplaceholder.typicode.com/${resourceName}` ) setResources(response.data) } return ( <ul> {resources.map(resource => ( <li key={resource.id}>{resource.title}</li> ))} </ul> )}export default ResourceListNow let’s apply the use of useEffect. This function takes in a function asit’s first argument that gets run when the component mounts or updates.You can pass it an array as a second argument to help it support skippingthe call to the function if certain values have not changed. The values in thearray could be either props or state.So in this case the fetchResources function won’t be called unless theresourceName has changed.See Hooks Effect// src/components/ResourceList.jsimport React, { useState, useEffect } from "react"import axios from "axios"const ResourceList = ({ resourceName }) => { const [resources, setResources] = useState([]) const fetchResources = async resourceName => { const response = await axios.get( `https://jsonplaceholder.typicode.com/${resourceName}` ) setResources(response.data) } useEffect(() => { fetchResources(resourceName) }, [resourceName]) return ( <ul> {resources.map(resource => ( <li key={resource.id}>{resource.title}</li> ))} </ul> )}export default ResourceListCustom HooksThe most powerful thing that React Hooks offer are custom hooks.You just need to define a function that starts with the word use. In ourcase we will define useResources. 
Inside of this function we have movedall our code for generating resources and setResources, definingfetchResources, and useEffect.We’ve also made our method take in resourceName, and return resources.We’ve then added a call to useResources within our functional component.// src/components/ResourceList.jsimport React, { useState, useEffect } from "react"import axios from "axios"const useResources = resourceName => { const [resources, setResources] = useState([]) const fetchResources = async resourceName => { const response = await axios.get( `https://jsonplaceholder.typicode.com/${resourceName}` ) setResources(response.data) } useEffect(() => { fetchResources(resourceName) }, [resourceName]) return resources}const ResourceList = ({ resourceName }) => { const resources = useResources(resourceName) return ( <ul> {resources.map(resource => ( <li key={resource.id}>{resource.title}</li> ))} </ul> )}export default ResourceListIf we want to take this further, we can move our custom hook to another fileand import it.// src/components/useResources.jsimport { useState, useEffect } from "react"import axios from "axios"const useResources = resourceName => { const [resources, setResources] = useState([]) const fetchResources = async resourceName => { const response = await axios.get( `https://jsonplaceholder.typicode.com/${resourceName}` ) setResources(response.data) } useEffect(() => { fetchResources(resourceName) }, [resourceName]) return resources}Now we can greatly simplify our ResourcesList component.// src/components/ResourceList.jsimport React from "react"import useResources from "./useResources"const ResourceList = ({ resourceName }) => { const resources = useResources(resourceName) return ( <ul> {resources.map(resource => ( <li key={resource.id}>{resource.title}</li> ))} </ul> )}export default ResourceListBecause this custom hook takes the name of the resource as it’s argument,we can re-use it in another context with a different resource name.Note that we import React in our component because JSX will not beinterpretted without it. 
In our useResources.js it isn’t necessary.Refactoring YouTube Clone ProjectYou can clone theYouTube Clone Projectfrom Github.git clone git@github.com:adrianhajdin/project_youtube_video_player.gitgit checkout e7c3525We’re checking out commit e7c3525 because this is where the application didn’thave any of the hooks applied yet.Converting App Component to Function Based// src/App.jsimport React, { useState } from "react"import { Grid } from "@material-ui/core"import { SearchBar, VideoList, VideoDetail } from "./components"import youtube from "./api/youtube"const App = () => { const [videos, setVideos] = useState([]) const [selectedVideo, setSelectedVideo] = useState(null) const handleSubmit = async searchTerm => { const response = await youtube.get("search", { params: { part: "snippet", maxResults: 5, key: "[YOUR_API_KEY]", q: searchTerm } }) setVideos(response.data.items) setSelectedVideo(response.data.items[0]) } const onVideoSelect = video => { this.setState({ selectedVideo: video }) } return ( <Grid style={{ justifyContent: "center" }} container spacing={10}> <Grid item xs={11}> <Grid container spacing={10}> <Grid item xs={12}> <SearchBar onFormSubmit={this.handleSubmit} /> </Grid> <Grid item xs={8}> <VideoDetail video={selectedVideo} /> </Grid> <Grid item xs={4}> <VideoList videos={videos} onVideoSelect={this.onVideoSelect} /> </Grid> </Grid> </Grid> </Grid> )}class App extends React.Component { state = { videos: [], selectedVideo: null } render() { const { selectedVideo, videos } = this.state }}export default AppLeft off at 32:13"
} ,
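The notes above stop mid-refactor (“Left off at 32:13”), so the functional App still carries this.setState and this.handleSubmit references from the class version. Purely as a sketch of where that conversion appears to be heading, and not something the original notes contain, the leftover pieces would switch to the useState setters and plain function references:

```jsx
// Sketch only: completes the App conversion the notes above left unfinished.
// Imports, component names, and the youtube helper are taken from the snippet in the notes.
import React, { useState } from "react"
import { Grid } from "@material-ui/core"
import { SearchBar, VideoList, VideoDetail } from "./components"
import youtube from "./api/youtube"

const App = () => {
  const [videos, setVideos] = useState([])
  const [selectedVideo, setSelectedVideo] = useState(null)

  const handleSubmit = async searchTerm => {
    const response = await youtube.get("search", {
      params: { part: "snippet", maxResults: 5, key: "[YOUR_API_KEY]", q: searchTerm }
    })
    setVideos(response.data.items)
    setSelectedVideo(response.data.items[0])
  }

  // was: this.setState({ selectedVideo: video })
  const onVideoSelect = video => setSelectedVideo(video)

  return (
    <Grid style={{ justifyContent: "center" }} container spacing={10}>
      <Grid item xs={11}>
        <Grid container spacing={10}>
          <Grid item xs={12}>
            {/* was: onFormSubmit={this.handleSubmit} */}
            <SearchBar onFormSubmit={handleSubmit} />
          </Grid>
          <Grid item xs={8}>
            <VideoDetail video={selectedVideo} />
          </Grid>
          <Grid item xs={4}>
            <VideoList videos={videos} onVideoSelect={onVideoSelect} />
          </Grid>
        </Grid>
      </Grid>
    </Grid>
  )
}

export default App
```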
{
"title" : "MySQL",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/mysql/",
"date" : "",
"content" : "Back to Cheat SheetsCommand Line Commands# Log into MySQL server as root with password$ mysql -u root -pMySQL Client Commands# quit clientquit;# show list of databasesshow databases;# use databaseuse redconfetti;# show tables in current databaseshow tables;# create user account accessible from localhostCREATE USER 'user'@'localhost' IDENTIFIED BY 'secretpass';# create user account accessible from any hostCREATE USER 'user'@'%' IDENTIFIED BY 'secretpass';# grant all privileges on database to userGRANT ALL PRIVILEGES ON my_database.* TO 'user'@'%';"
} ,
{
"title" : "NPM",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/npm/",
"date" : "",
"content" : "Back to Cheat SheetsCommon Commands# Update Node Package Managernpm install -g npm# Initialize an NPM managed project, creating packages.jsonnpm init# Install Package, and add to packages.json as dependencynpm install package-name --save# Uninstall Packagenpm uninstall package-name --save# List installed packages# Also points out installed packages that are not dependencies as "extraneous".npm ls# Remove Installed Packages that are not dependenciesnpm prune# List outdated packagesnpm outdated# Install Package as production dependencynpm install --save-prod gulp# Install Package as development dependencynpm install --save-dev gulp# Install Package Globally (used for CLI tools)npm install -g gulp# Detect Global Packages that need updatingnpm outdated -g --depth=0# Publish / Unpublish Module to NPM Registrynpm publishnpm unpublish# Register NPM User Account from Command Line# You can view this user account on the web by inserting your username into this URL:# https://www.npmjs.com/~your-usernamenpm adduser# Display NPM Configurationnpm config ls# Increment the Version of your Packagenpm version patchnpm version majornpm version minornpm version prereleasenpm version preminornpm version premajorNode Version Manager (NVM)# list remote versionsnvm ls-remote# list locally installed versionsnvm listCreating a Node ModuleUsing npm init you can completely setup a packages.json file that defines yourmodule.In the “main” Javascript file specified, index.js, you can define properties onthe “export” object and these properties will be accessible from your package.# index.jsexports.printMsg = function() { console.log("This is a message from the demo package");}Javascript using module dependencyvar demo = require('npm-demo-pkg');demo.printMsg()# // outputs "This is a message from the demo package" to consolePublishing Node ModulesThe name and version are the only fields required in your packages.json file.You have to have a registered user account in the NPM registry. You canaccomplish this using npm adduser.Once this is completed, you simply use npm publish to push your new package upto the NPM registry. To update your package you will need to first update theversion in packages.json, then use npm publish. You can also use the npm versioncommand to increment the version.Semantic VersioningProjects should start out released as version 1.0.0. Anything before 1.0.0, suchas version 0.2.1 would be considered a pre-release (i.e. Alpha, Beta, etc).After the first stable release, the following should apply:Bug fixes and other minor changes: Patch release, increment the last number,e.g. 1.0.1.New features which don’t break existing features: Minor release,increment the middle number, e.g. 1.1.0Changes which break backwards compatibility: Major release, increment the firstnumber, e.g. 
2.0.0When you are specifying the version of a package that you want in yourpackages.json file, as a dependency, you can use the following formats tospecify that you only want patch update, no minor or major updates.Update Latest Patch Versions Only 1.4 ~1.4.2 1.4.xUpdate Latest Minor Versions Only 1 1.x ^1.4.0Update to Latest Major Version * (asterisk) xScoped PackagesA scoped package has a name that begins with your username like so:{ "name": "@username/project-name"}You can initialize your package using the –scoped argument like so:npm init --scope=usernameIf you create scoped packages all the time, you can configure NPM to do this bydefault in your ~/.npmrc file.npm config set scope usernameScoped packages are private by default. NPMjs.com requires that you be a paidmember to host your own private packages with them. Public scoped packages arefree however without requiring a membership. You can publish a package, and setit as public for all future publishes, using this command:npm publish --access=publicTo use a scoped package, you simply include the username before the package namelike so:npm install @username/project-name --save// in your project require the package like sovar projectName = require("@username/project-name");"
} ,
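To make the version-range formats listed above easier to compare at a glance, here is a small illustrative sketch. The package names are hypothetical, and in a real package.json this block would be plain, comment-free JSON:

```js
// Hypothetical dependencies block, written as a JS object literal so the
// comments can sit inline; package.json itself does not allow comments.
const dependencies = {
  "example-patch-only": "~1.4.2", // latest patch releases only (stays on 1.4.x)
  "example-minor-ok": "^1.4.0",   // latest minor + patch releases (stays on 1.x)
  "example-anything": "*"         // any version, including new major releases
};
```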
{
"title" : "PGP",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/pgp/",
"date" : "",
"content" : "Back to Cheat SheetsPretty Good Privacy (PGP)See Introduction to GnuPG for more detail.# install using homebrewbrew install gpg# generate your personal keygpg --gen-key# list keysgpg --list-keys# list secret keysgpg --list-secret-keys# encrypt a file (requires specifying recipient)gpg -e -r jsmith@example.com secret.txt# decrypt and view encrypted file contentsgpg -d secret.txt.gpg# decrypt and save file contents to a new filegpg -d -o secret.txt secret.txt.gpg# import a persons public keygpg --import publickey.txt# get ASCII-armored public keygpg --output publickey.txt --armor --export jsmith@example.com"
} ,
{
"title" : "PostgreSQL",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/postgresql/",
"date" : "",
"content" : "Back to Cheat SheetsThese commands are specific to Postgres installed on a Mac usingHomebrew. See also PostgreSQL SELECT Docs.Command Line Commands# initialize your Postgres database cluster (collection of databases managed by Postgres server)$ initdb /usr/local/var/postgres -E utf8# Start Postgres server manually$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start# Stop Postgres server manually$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log stop# Create a user without a corresponding database$ createuser myusername --no-createdb --no-superuser --no-createrole --pwprompt# Create a databse with owner specified$ createdb my_database --owner=myusername# Drop a database$ dropdb my_database# Use PostgreSQL command line client to view default 'postgres' table$ psql postgres# Use PostgreSQL command line client to connect as a specific user, connected to specific database$ psql -U myusername -d my_database# Run an PostgreSQL command with user specified$ psql -c 'CREATE DATABASE my_database WITH OWNER myusername ENCODING 'UTF8';' -d canvas_test# Backup single database to file$ pg_dump my_database > backup_file_path# Restore single database from file$ psql my_database < backup_file_path# Backup entire database cluster$ pg_dumpall > full_backup_file_path# Restore entire database clusterpsql -f full_backup_file_path postgresPSQL Client Commands-- get list of non SQL commands\?-- execute query every 5 secondsselect id from tablename limit 5; \watch 5-- list databases\l\list-- connect to database\c my_database-- list tables in connected database\dt-- list columns on table\d table_name-- quit psql client\q"
} ,
{
"title" : "React",
"category" : "",
"tags" : "",
"url" : "/resources/notes/react/powering-up-with-react/",
"date" : "",
"content" : "CodeSchool Course - Powering Up With ReactLevel 1. First ComponentWhat is ReactReact is a JavaScript library for building user interfaces (UIs). Some peopleuse it as the V in MVC.Why ReactReact was build to solve one problem: building large applications with data thatchanges over time.Conceived at FacebookHeavily used on products made by Facebook and Instagram. Built to simplify theprocess of building complex UIs.After Facebook open-sourced React, it’s now used by Dropbox, AirBNB, Instagram,Netflix, and Paypal.PrerequisitesJavascript BasicsSee JavaScript Road Trip Declaring variables Creating and invoking functionsES2015See ES2015 Class Syntax Arrow functions Spread operatorWhat We’ll LearnWe’ll cover some of the features React offers, including how to: Write React components Render data to the page Make components communicate Handle user events Capture user input Talk to remote serversComponent-based ArchitectureIn React, we solve problems by creating components. If a component gets toocomplex, we break it into smaller, simpler components.We’ll first focus on a comment section added to a page, which features a simpleform where a person can enter their name and comment, then click on the ‘PostComment’ button. Below this is a list of comments. This entire interface isstored inside of a “StoryBox” component.Inside of the StoryBox component is a StoryForm component that contains the formhat users can use to add stories to the feed, as well as a separate Storycomponent for each story displayed in the feed.What is a React ComponentA component in React works similar to JavaScript functions: It generates anoutput every time it is invoked.With a React component a render() method is called, which generates the HTML:<div> <p>Good Morning</p> <p>10:45AM</p></div>10 minutes later we run the render() method again, and instead it generates:<div> <p>Good Morning</p> <p>10:55AM</p></div>The Virtual DOM ExplainedThe virtual DOM is an in-memory representation of real DOM elementsgenerated by React components before any changes are made to the page.When the component is rendered, the HTML that is being generated is the VirtualDOM. The output is then transformed into actual HTML within the browsers DOM.The Virtual DOM in ActionWhy go through the extra step? Because using the Virtual DOM makes the updatesto the actual DOM faster.Virtual DOM diffing allows React to minimize changes to the DOM as aresult of user actions — therefore, increasing browser performance.With the example shown above, the second time the component renders the HTML,the time has changed. The diffing makes the update faster.Creating Our First React ApplicationWe want to simply print a message to the screen using a React component.Components in React are JavaScript classes that inherit from theReact.Component base class./* components.js */// Components are written in upper camel case// Component class inherits from a React base classclass StoryBox extends React.Component { // every component needs a render() function render() { return <div>Story Box</div> }}You don’t have to put quotes around the markup that is being returned, becauseof JSX. 
It allows us to include HTML in our JavaScript.Now we need to tell our application where to put the result into our web page.Rendering Our First React ComponentWe use ReactDOM to render components to our HTML page as it reads output from asupplied React component and adds it to the DOM.class StoryBox extends React.Component { render() { return <div>Story Box</div> }}// first arg: Invokes the StoryBox component (no quotes needed)// second arg: The target container where component will be rendered toReactDOM.render(<StoryBox />, document.getElementById("story-app"))Referencing the ComponentEvery time we create a new React component, we use it by writing an elementnamed after the class.class StoryBox → <StoryBox />This is JSX syntax, so the case used with the component name is important.Application Structure/* components.js */ReactDOM.render(<StoryBox />, document.getElementById("story-app"))The page must contain a DIV with the correct ID.<!-- index.html --><!DOCTYPE html><html> <body> <div id="story-app"></div> </body></html>That’s all there is to creating a component. Now we just need to add libraries.<!-- index.html --><!DOCTYPE html><html> <body> <div id="story-app"></div> <!-- Supports React Components --> <script src="vendors/react.js"></script> <script src="vendors/react-dom.js"></script> <!-- Provides support for ES2015 and JSX code --> <script src="vendors/babel.js"></script> <script type="text/babel" src="components.js"></script> </body></html>Project Folder index.html components.js vendors react.js react-dom.js babel.js Our React Application FlowTo clarify, here is what takes place when we load a page with a React component: index.html is opened dependencies defined in index.html are loaded StoryBox component is rendered, then applied to the actual DOM elementQuick Recap on React React was built to solve one problem: building large applications with datathat changes over time In React, we write apps in terms of components We use JavaScript classes when declaring React components Components must extend the React.Component class and must contain a render()method We call the ReactDOM.render() function to render components to a webpageLevel 1 - Section 2No Quotes Around MarkupThe markup we use when writing React apps is not a string. This markup iscalled JSX (JavaScript XML).class StoryBox extends React.Component { render() { // HTML elements are written in lowercase return <div>Story Box</div> }}ReactDOM.render( // React components are written in upper camelcase <StoryBox />, document.getElementById("story-app"))JSX is just another way of writing JavaScript with a transpile step.// JSX<div>Story Box</div>// Transpiled JSX CodeReact.createElement('div', null, 'Story Box')// JSX<StoryBox />// Transpiled JSX CodeReact.createElement(StoryBox, null)This may take some getting used to, but will feel natural after gainingconfidence in using it.Getting Used to the JSX SyntaxJSX looks similar to HTML, and it is ultimately transformed into JavaScript.class StoryBox extends React.Component { render() { return ( <div> <h3>Stories App</h3> <p className="lead">Sample paragraph</p> </div> ) }}Notice above how the attribute on the paragraph is ‘className’ instead of‘class’. This is because ‘class’ is a reserved JavaScript keyword.// Transpiled JSX codeReact.createElement( "div", null, React.createElement("h3", null, "Stories App"), React.createElement("p", { className: "lead" }, "Sample paragraph"))Browsers do not understand JSX, but they do understand JavaScript. 
They are ableto run the transpiled JavaScript that is created from the JSX, which then isapplied as HTML within the DOM.<div data-reactroot> <div> <h3>Stories App</h3> <p class="lead">Sample paragraph</p> </div></div>Using the Date Object in JSXHere, we’re displaying the current time using JavaScript’s native Date objectand JSX.class StoryBox extends React.Component { render() { const now = new Date() return ( <div> <h3>Stories</h3> <p className="lead">Current time: {now.toTimeString()}</p> </div> ) }}Code written within curly braces gets interpreted as literal JavaScript in JSX.Iterating Arrays in JSXHere, we’re displaying a list of elements using JSX and JavaScript’s nativemap function.class StoryBox extends React.Component { render() { // ... const topicsList = ["HTML", "JavaScript", "React"] return ( <div> <ul> {topicsList.map(topic => ( <li>{topic}</li> ))} </ul> </div> ) }}In the above code, the JSX is converted to:<li>HTML</li><li>JavaScript</li><li>React</li>Quick Recap on JSX JSX stands for JavaScript XML. JSX markup looks similar to HTML, but ultimately gets transpiled toJavaScript function calls, which React will know hot to render to thepage. Code written within curly braces are interpreted as literal JavaScript It is a common pattern to map arrays to JSX elements.Level 2 - Talk Through PropsThe App We’re BuildingWe are building a commenting engine that will allow visitors to post commentson a blog post, picture, video, etc. This will allow users to interact with eachother, provide social commentary, etc.Adding Components to Our Comments AppWhat the structure of our React app should look like. CommentBox as the root component Comment as the re-usable component for each comment displayedPattern for Adding New ComponentsThere are some common things we always do when creating new components.// New class inherits from React.Componentclass NewComponent extends React.Component { render() { // render method must return JSX return ( ... ); }}Coding the Comment ListLet’s start with an HTML mockup and identify potential components by looking atthe markup.Here is a mockup of the comment box HTML:<div class="comment-box"> <h3>Comments</h3> <h4>class="comment-count">2 comments</h4> <div class="comment-list"> <!-- each comment goes here --> </div></div>Here is an isolated example of what the comment component will render:<div class="comment"> <p class="comment-header">Anne Droid</p> <p class="comment-body"> I wanna know what love is... </p> <div class="comment-footer"> <a href="#" class="comment-footer-delete"> Delete Comment </a> </div></div>Writing the Comment ComponentThe Comment component renders the markup for each comment, including itsauthor and body.class Comment extends React.Component { render() { return ( <div className="comment"> <p className="comment-header">Anne Droid</p> <p className="comment-body">I wanna know what love is...</p> <div className="comment-footer"> <a href="#" className="comment-footer-delete"> Delete Comment </a> </div> </div> ) }}We can now reference this component in JSX as <Comment />.Writing the CommentBox ComponentNow we’ll declare the CommentBox component and use the previously declaredComment component.class CommentBox extends React.Component { render() { return ( <div className="comment-box"> <h3>Comments</h3> <h4> className="comment-count">2 comments</h4> <div className="comment-list"> <Comment /> <Comment /> </div> </div> ) }}As you can see here, we’re using the Comment component twice. 
The only problem here is that all our comments look the same.React Components Accept ArgumentsArguments passed to components are called props. They look similar to regular HTML element attributes.class CommentBox extends React.Component { render() { return ( <div className="comment-box"> <h3>Comments</h3> <h4 className="comment-count">2 comments</h4> <div className="comment-list"> <Comment author="Morgan McCircuit" body="Great picture!" /> <Comment author="Bending Bender" body="Excellent stuff" /> </div> </div> ) }}Reading Props in the Comment ComponentArguments passed to components can be accessed using the this.props object.class Comment extends React.Component { render() { return ( <div className="comment"> <p className="comment-header">{this.props.author}</p> <p className="comment-body">{this.props.body}</p> <div className="comment-footer"> <a href="#" className="comment-footer-delete"> Delete Comment </a> </div> </div> ) }}Quick Recap on PropsWe just covered a lot of content — here’s a summary of what we learned. Converted an HTML mockup to React components Created two components: CommentBox and Comment Learned how to pass arguments to components using props Props look like HTML element attributesProblem: Props Aren’t Dynamic YetWe are passing literal strings as props, but what if we wanted to traverse an array of objects? In the real world we rarely work with hardcoded values.JavaScript Object ArraysTypically, when we consume data from API servers, we are returned object arrays.const commentList = [ { id: 1, author: "Morgan McCircuit", body: "Great picture!" }, { id: 2, author: "Bending Bender", body: "Excellent stuff" }]Mapping an Array to JSXWe can use JavaScript’s map function to create an array with Comment components.class CommentBox extends React.Component { // ... // Underscore helps distinguish custom methods from React methods _getComments() { const commentList = [ { id: 1, author: "Morgan McCircuit", body: "Great picture!" }, { id: 2, author: "Bending Bender", body: "Excellent stuff" } ] return commentList.map(() => { return <Comment /> }) }}Passing Dynamic PropsThe callback to map takes an argument that represents each element from the calling object.class CommentBox extends React.Component { // ... _getComments() { const commentList = [ { id: 1, author: "Morgan McCircuit", body: "Great picture!" }, { id: 2, author: "Bending Bender", body: "Excellent stuff" } ] return commentList.map(comment => { return <Comment author={comment.author} body={comment.body} /> }) }}Anything in curly braces is interpreted as literal JavaScript.<img src={this.props.avatarUrl} alt={`${this.props.author}'s picture`} />Using Unique Keys on List of ComponentsSpecifying a unique key when creating multiple components of the same type can help improve performance. It helps React track which element is which within the loop.<Comment author={comment.author} body={comment.body} key={comment.id} />Using the _getComments() methodWe’ll store the returned value in a variable named comments and use it for display purposes.class CommentBox extends React.Component { render() { const comments = this._getComments(); return( <div className="comment-box"> <h3>Comments</h3> <h4 className="comment-count">{comments.length} comments</h4> <div className="comment-list"> {/* JSX knows how to render arrays of components */} {comments} </div> </div> ); } _getComments() { ... }}Incorrect Grammar on the Comments TitleThe title has incorrect grammar in some cases. 
When the title says ‘3 comments’ or ‘2 comments’, it’s fine, but when it says ‘1 comments’ the grammar is incorrect.Fixing the Title With Comment CountLet’s write a new method called _getCommentsTitle() that handles the plural case in our title.class CommentBox extends React.Component { render() { const comments = this._getComments() return ( // ... <h4 className="comment-count"> {this._getCommentsTitle(comments.length)} </h4> // ... ) } _getCommentsTitle(commentCount) { if (commentCount === 0) { return "No comments yet" } else if (commentCount === 1) { return "1 comment" } else { return `${commentCount} comments` } }}The title now handles different quantities of comments accordingly.Quick Recap on Dynamic Props Passed dynamic props using variables Mapped object arrays to JSX arrays for display purposes Used JavaScript to handle the plural case on the titleLevel 3 - Component StateShow and Hide CommentsWe’d like to add a button to the page that will let users toggle the comments. At the top of the CommentBox it displays the number of comments, e.g. “3 Comments”, along with a ‘Show Comments’ button. The comments are hidden until this button is clicked.Once the comments are displayed, the button changes to ‘Hide Comments’. How can we show and hide comments based on button clicks?Different Ways to Manipulate the DOM Direct DOM Manipulation jQuery, Backbone, etc. Indirect DOM Manipulation ReactDirect DOM ManipulationOne way to manipulate the DOM is by modifying it directly via JavaScript in response to browser events.Events → DOM Updates/* jquery example */$(".show-btn").on("click", function() { $(".comment-list").show()})$(".hide-btn").on("click", function() { $(".comment-list").hide()})Indirect DOM ManipulationIn React, we don’t modify the DOM directly. Instead, we modify a component state object in response to user events and let React handle updates to the DOM.Events → Update State → DOM UpdatesWe modify the component state, and then let React handle the updates.render() { if (this.state.showComments) { // code displaying comments } else { // code hiding comments }}How to Use State in a ComponentThe state is a JavaScript object that lives inside each component. We can access it via this.state.class CommentBox extends React.Component { render() { const comments = this._getComments() if (this.state.showComments) { // add code for displaying comments } return ( <div className="comment-box"> <h4 className="h4">{this._getCommentsTitle(comments.length)}</h4> <div className="comment-list">{comments}</div> </div> ) }}Showing Comments Only if State Is trueclass CommentBox extends React.Component { render() { const comments = this._getComments() let commentNodes if (this.state.showComments) { commentNodes = <div className="comment-list">{comments}</div> } return ( <div className="comment-box"> <h4 className="h4">{this._getCommentsTitle(comments.length)}</h4> {commentNodes} </div> ) }}Hiding Comments on the Initial StateWe set the initial state of our component in the class constructor.class CommentBox extends React.Component { constructor() { super() this.state = { showComments: false } } render() { // ... 
}}When defining a constructor, super() must be called to ensure that theReact.Component constructor behavior is kept intact.How to Update a Component’s StateWe don’t assign to the state object directly — instead, we call setState by passing it an object.// wrong, will not workthis.state.showComments = true// updates showComments propertythis.setState({ showComments: true })Calling setState will only update the properties passed as an argument,not replace the entire state object.Causing State ChangeState changes are usually triggered by user interactions with our app.Things that could cause state change: Button clicks Link clicks Form submissions AJAX requests And more!Handling Click EventsLet’s add a button that will toggle the showComments state when a click event isfired.class CommentBox extends React.Component { render() { // ... return( ... <button onClick={this._handleClick.bind(this)}>Show comments</button> ... ); } _handleClick() { this.setState({ showComments: !this.state.showComments }); }}Button Text Logic Based on StateWe can switch the button text based on the component’s state.class CommentBox extends React.Component { render() { // ... let buttonText = "Show comments" if (this.state.showComments) { buttonText = "Hide comments" // ... } return ( // ... <button onClick={this._handleClick.bind(this)}>{buttonText}</button> // ... ) }}Demo: Hide and Show CommentsOur app shows and hides comments when the button is clicked.Quick Recap on State State represents data that changes over time. We declare and initial state in the component’s constructor. We update state by calling this.setState(). Calling this.setState() causes our component to re-render.Level 4 - Synthetic EventsAdding New CommentsWe want to let users add new comments to our app. We will call the new componentCommentForm, and it will provide input fields for the user to provide theirname, the comment text, and then click on ‘Post Comment’.New Component: CommentFormCommentForm is a new component that will allow users to add comments to our app.It will be a child of the CommentBox, displayed above the list of Comments.Coding the CommentForm Componentclass CommentForm extends React.Component { render() { return ( <form className="comment-form"> <label>Join the discussion</label> <div className="comment-form-fields"> <input placeholder="Name:" /> <textarea placeholder="Comment:" /> </div> <div className="comment-form-actions"> <button type="submit">Post comment</button> </div> </form> ) }}Adding an Event Listener to Our FormTo add an event listener to the form, we use the onSubmit prop and pass ahandler to it.class CommentForm extends React.Component { render() { return ( <form className="comment-form" onSubmit={this._handleSubmit.bind(this)}> // ... <input placeholder="Name:" /> <textarea placeholder="Comment:" /> // ... 
</form> ) } _handleSubmit(event) { // prevents page from reloading when form is submitted event.preventDefault() }}Problem: Can’t Access User Input in handleSubmit()We still need a way to access the name and comment field values within the_handleSubmit() function.Accessing Form Data from HandlerWe can use refs for assign form values to properties on the componentobject.<input placeholder="Name:" ref={(input) => this._author = input} /><textarea placeholder="Comment:" ref={(textarea) => this._body = textarea}></textarea>We’ll use these refs to access values from the input elements.class CommentForm extends React.Component { render() { return ( <form className="comment-form" onSubmit={this._handleSubmit.bind(this)}> // ... <input placeholder="Name:" ref={input => (this._author = input)} /> <textarea placeholder="Comment:" ref={textarea => (this._body = textarea)} /> // ... </form> ) } _handleSubmit(event) { // prevents page from reloading when form is submitted event.preventDefault() }}What Setting the refs is Actually Doing<input placeholder="Name:" ref={input => (this._author = input)} />This is the same as:<input placeholder="Name:" ref={function(input) { this._author = input }.bind(this)}/>The DOM element itself is passed into the callback as ‘input’, with theCommentForm passed as *this* via thebind() call.You may be wondering, who calls this function? React runs ref callbacks onrender.Passing the User Input to the CommentBoxclass CommentForm extends React.Component { render() { return ( // ... <input placeholder="Name:" ref={(input) => this._author = input} /> <textarea placeholder="Comment:" ref={(textarea) => this._body = textarea}></textarea> // ... ); } _handleSubmit(event) { event.preventDefault(); // these are populated from refs in JSX let author = this._author; let body = this._body; // this method will be passed as an argument from the parent CommentBox this.props.addComment(author.value, body.value); }}Data About Comments Lives in CommentBoxThis is a common pattern with React, where we have to pass references to childcomponents. The array of comments is part of the CommentBox component, so weneed to propagate new comments from CommentForm over to CommentBox.Propagating data about a new comment to CommentBox is simple. You just pass it acallback prop.Using CommentForm to Add CommentsFunctions in JavaScript are first-class citizens, so we can pass them asprops to other components.class CommentBox extends React.Component { render() { return ( <div className="comment-box"> <CommentForm addComment={this._addComment.bind(this)} /> // ... </div> ) } // this method gets triggered by CommentForm when a new comment is added _addComment(author, body) {}}Adding Functionality to Post Commentsclass CommentBox extends React.Component { render() { return ( <div className="comment-box"> <CommentForm addComment={this._addComment.bind(this)} /> // ... </div> ) } _addComment(author, body) { const comment = { id: this.state.comments.length + 1, author, body } this.setState({ comments: this.state.comments.concat([comment]) }) }}We are using concat() instead of push(), because concat() returns a newreference to the array, instead of mutating the existing array. This helps Reactstay fast, by detecting the change that happened in the array earlier on.This comments array doesn’t exist in the state yet though.Comments Are Not Part of the StateCurrently, we’re defining an array every time the _getComments method iscalled. Let’s move this data to the state.class CommentBox extends React.Component { // ... 
_getComments() { const commentList = [ { id: 1, author: "Morgan McCircuit", body: "Great picture!" }, { id: 2, author: "Bending Bender", body: "Excellent stuff" } ] // ... }}To dynamically update the component, we need to move the comments list into thecomponents state.class CommentBox extends React.Component { constructor() { super() this.state = { showComments: false, comments: [ { id: 1, author: "Morgan McCircuit", body: "Great picture!" }, { id: 2, author: "Bending Bender", body: "Excellent stuff" } ] } }}Now they are part of the component state.Rendering Comments From the StateLet’s use the comments from the state object to render our component.class CommentBox extends React.Component { // ... _getComments() { return this.state.comments.map(comment => { return ( <Comment author={comment.author} body={comment.body} key={comment.id} /> ) }) }}Review: Event Handling in ReactIn order to ensure events have consistent properties across different browsers,React wraps the browser’s native events into synthetic events, consolidatingbrowser behaviors into one API.Form submission handling might work slightly different for each browser, butReact provides support for the ‘onSubmit’event.Quick Recap We use React’s event system to capture user input, including form submissionsand button clicks. Refs allow us to reference DOM elements in our code after the componenthas been rendered. Parent components can pass callback functions as props to child components toallow two-way communication. Synthetic events are a cross-browser wrapper around a browser’s native eventsystem.Level 5 - Section 1 - Talking to Remote Servers5.1 Using Lifecycle Methods to Load CommentsComments Are StaticIn the real world, we’d want to pull comments from an API instead of hard-codingthe data.class CommentBox extends React.Component { constructor() { super() this.state = { showComments: false, comments: [ { id: 1, author: "Morgan McCircuit", body: "Great picture!" }, { id: 2, author: "Bending Bender", body: "Excellent stuff" } ] } }}Loading Comments From a Remote ServerLet’s set the initial state of comments as an empty array so we can laterpopulate it with data from an API server.class CommentBox extends React.Component { constructor() { super() this.state = { showComments: false, comments: [] } }}Adding jQuery as a DependencyjQuery will help us make Ajax requests. We can download it from the jQuerywebsite and include it in our HTML page. index.html components.js vendors react.js react-dom.js babel.js jquery.js <!DOCTYPE html><html> <body> <div id="story-app"></div> <script src="vendors/react.js"></script> <script src="vendors/react-dom.js"></script> <script src="vendors/jquery.js"></script> <script src="vendors/babel.js"></script> <script type="text/babel" src="vendors/components.js"></script> </body></html>How to Fetch Data in a ComponentLet’s write a class method that will make Ajax requests in the CommentBoxcomponent.class CommentBox extends React.Component { // ... _fetchComments() { jQuery.ajax({ method: "GET", url: "/api/comments", success: comments => { this.setState({ comments }) } }) }}We call the setState method when data is received from the API server. We are using the arrow function because itpreserves the ‘this’ binding to our class.Deciding Where to Call _fetchComments()class CommentBox extends React.Component { render() { // ... } _fetchComments() { // ... 
}}We cannot call _fetchComments from render(), or else we will get an infinite loop: _fetchComments calls setState, which causes React to call render() again, which would call _fetchComments again.React’s Lifecycle MethodsLifecycle methods in React are functions that get called when a component is rendered for the first time or is about to be removed from the DOM.We will focus on 3 lifecycle methods. componentWillMount() - called after constructor() componentDidMount() - called after render() componentWillUnmount()For a full list of React’s lifecycle methods, visit React Component - The Component LifecycleIn React, mounting means rendering for the first time. Unmounting means getting removed from the DOM.Fetching Data on the Mounting PhaseThe componentWillMount method is called before the component is rendered to the page.class CommentBox extends React.Component { componentWillMount() { this._fetchComments() } render() { // ... } // ...}Getting Periodic UpdatesIn order to check whether new comments are added, we can periodically check the server for updates. This is known as polling.Polling Data on the Mounting PhaseThe componentDidMount method is called after the component is rendered to the page. This is a perfect place to start our polling process.class CommentBox extends React.Component { // ... componentDidMount() { // run comment fetching every 5000 milliseconds (5 seconds) setInterval(() => this._fetchComments(), 5000) }}Updating Component With New CommentsReact optimizes the rendering process by only updating the DOM when changes are detected in the resulting markup. When running setState, if the resulting virtual DOM is not modified, no changes occur to the actual page. New state value after initial Ajax request → DOM change happens No new state value after second periodic Ajax request → No DOM change New state value after third periodic Ajax request → DOM change happensNote: render() is called after each Ajax response because setState is called in the response callback.Memory Leaks on Page ChangePage changes in a single-page app environment will cause each CommentBox component to keep loading new comments every five seconds, even when they’re no longer being displayed.With each new view that is loaded in a single-page application, without the browser actually reloading the rendered page, the setInterval() method sets up yet another interval timer that makes the same request every 5 seconds.Preventing Memory LeaksEach component is responsible for removing any timers it has created. We will remove the timer in the componentWillUnmount method.class CommentBox extends React.Component { // ... componentDidMount() { this._timer = setInterval(() => this._fetchComments(), 5000) } componentWillUnmount() { clearInterval(this._timer) }}This will ensure that the timer is removed when the component is about to be removed from the DOM.Memory Leak is GoneOur app can be freely navigated through now, without causing multiple unnecessary calls to the API.Reviewing the Steps for Loading Comments componentWillMount() is called. render() is called and CommentBox is mounted. “No comments yet” displayed. Component waits for API response and when it is received, setState() is called, causing render() to be called again. componentDidMount() is called, causing this._fetchComments to be triggered every five seconds. 
componentWillUnmount() is called when the component is about to be removedfrom the DOM and clears the fetchComments timeout.Quick Recap on Lifecycle MethodsLifecycle methods in React are functions that get called during certain phasesthat components go through. componentWillMount() is called before the component is rendered. componentDidMount() is called after the component is rendered. componentWillUnmount() is called immediately before the component isremoved from the DOM.Level 5 - Section 2 - Adding and Deleting Comments on the Server SideDeleting CommentsOur comments have a Delete Comment button now, but no delete actions areassociated to it.Deleting from the APIThe CommentBox component needs a new method to delete individual comments.class CommentBox extends React.Component { // ... _deleteComment(comment) { jQuery.ajax({ method: "DELETE", url: `/api/comments/${comment.id}` }) }}Updating the Comment ListWe will not wait for the API request to be finished before updating thecomponent’s state. We will give our user immediate visual feedback, which isknown as an optimistic update.class CommentBox extends React.Component { // ... _deleteComment(comment) { jQuery.ajax({ method: "DELETE", url: `/api/comments/${comment.id}` }) const comments = [...this.state.comments] const commentIndex = comments.indexOf(comment) comments.splice(commentIndex, 1) this.setState({ comments }) }}We’re using the spread operator to clone the existing array to comments.Taken from MDN web docs:The const declaration creates a read-only reference to a value. It doesnot mean the value it holds is immutable, just that the variable identifiercannot be reassigned. For instance, in the case where the content is anobject, this means the object’s contents (e.g., its parameters) can be altered.splice() is used to add or remove items from an array. SeeArray.prototype.splice(). The first argument is the location to begin, thesecond is the number of items to delete.Passing a Callback Prop to CommentEvents are fired from the Comment component. Since the event handler isdefined on the parent component CommentBox, we’ll pass it as a prop namedonDelete.class CommentBox extends React.Component { // ... _getComments() { return this.state.comments.map(comment => { return ( <Comment key={comment.id} comment={comment} onDelete={this._deleteComment.bind(this)} /> ) }) }}Adding an Event Listener to the Delete ButtonLet’s add an event listener to the Delete Comment button and call theonDelete callback prop.class Comment extends React.Component { render() { return ( // ... <a href="#" onClick={this._handleDelete.bind(this)}> Delete comment </a> // ... ) } _handleDelete(event) { event.preventDefault() this.props.onDelete(this.props.comment) }}Inside of _handleDelete() we’re ensuring that the page isn’t reloaded whenthe link is clicked. Then we’re calling the function that was passed to thecomponent as ‘onDelete’, and passing it the current comment.Adding a Confirmation to the Delete ButtonLet’s add an if statement and only call the onDelete callback propif confirm was true.class Comment extends React.Component { render() { return ( // ... <a href="#" onClick={this._handleDelete.bind(this)}> Delete comment </a> // ... 
) } _handleDelete(event) { event.preventDefault() if (confirm("Are you sure?")) { this.props.onDelete(this.props.comment) } }}confirm() is a native JavaScript function that displays a modal dialog with the message and two buttons (“OK” and “Cancel”).Comments Aren’t Added to a Remote ServerWe would like to post new comments to a remote server so they can persist across sessions.class CommentBox extends React.Component { // ... _addComment(author, body) { const comment = { id: this.state.comments.length + 1, author, body } this.setState({ comments: this.state.comments.concat([comment]) }) }}The ID shouldn’t be generated on the client side, but should instead come from the server side. It’s also not making an Ajax request to sync the comments with the server side.Posting Comments to a Remote ServerWe learned how to add new comments using a form. Now let’s make sure the new comments are sent to a remote server so they can be persisted.class CommentBox extends React.Component { // ... _addComment(author, body) { const comment = { author, body } jQuery.post("/api/comments", { comment }).success(newComment => { this.setState({ comments: this.state.comments.concat([newComment]) }) }) }}Here we’re sending the arguments to the remote API, and then adding the new comment, which contains the server-side generated ID, to the comments array within the state object.One-way Control FlowControl flows from higher level components down to child components, forcing changes to happen reactively. This keeps apps modular and fast. The CommentBox component passes the _deleteComment() method to Comment as a callback The CommentBox component passes the _addComment() method to CommentForm as a callback The CommentBox component passes the author and body props to each Comment componentWhen a child component needs to send data back to the parent, it does so via a callback.Total RecapHere’s a review of the two most important things we learned in this section. Parent components can send data to child components using props. Child components can accept callback functions as props to communicate back with parent components.Followup Screencast: Add a Build System to a React Application Pluralsight - Building Applications with React and Flux Pluralsight - React.js on Rails: Building a Full Stack Web App Pluralsight - Webpack Fundamentals"
} ,
{
"title" : "Rails Testing",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/rails-tests/",
"date" : "",
"content" : "Back to Cheat SheetsThese may be helpful to some, but truthfully I highly recommend learning how touse Rspec instead. The tests are much easier to read andwrite. You can use Rspec Rails for Rubyon Rails projects, and have the ability to test your controllers separate fromyour views. It even has tests for routes, just in case you might need them.# Run Test Unit Testruby -Itest test/unit/post_test.rb# Run Test Unit Test Methodruby -Itest test/unit/post_test.rb -n test_the_truth# Run a specific RSpec filerspec spec/models/post_spec.rb# Run a specific example group (using line number) in RSpec filerspec spec/models/post_spec.rb:545# Run a specific Cucumber featurecucumber features/manage_posts.feature# Run a specific Cucumber scenario (using line number) in feature filecucumber features/manage_posts.feature:33"
} ,
{
"title" : "Ruby on Rails",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/rails/",
"date" : "",
"content" : "Back to Cheat SheetsGenerate New Rails App# Generate for API onlyrails new my_app --api# Using MySQL databaserails new_my_app --database=mysql# Using PostgreSQL databaserails new my_app --database=postgresql# Without Turbolinksrails new my_app --skip-turbolinks# Without JavaScriptrails new my_app --skip-javascriptrails new my_app -J# Without Sprocketsrails new my_app --skip-sprocketsrails new my_app -S# Without Testsrails new my_app --skip-test# Don't run Bundle Installrails new my_app --skip-bundle# Preconfigure for app-like JavaScript with Webpack (options: react/vue/angular)rails new my_app --webpack=WEBPACK# Preconfigure for Vue with Webpackrails new my_app --webpack=vueRake Tasks# Display Rails routing tablerake routesActiveRecord# Get name of table associated with modelModel.table_name# Get field/column names from database tableModel.column_namesCapistrano# View available Capistrano tasksbundle exec cap -vT"
} ,
{
"title" : "Raspberry Pi",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/raspberry-pi/",
"date" : "",
"content" : "Back to Cheat SheetsCommand Line# Start X-Windows# Use CTRL+ALT+DELETE to access menu to exit X-Windowsstartx# Run Raspberry Pi Configuration Toolsudo raspi-config# Shutdown Immediatelyshutdown -h now# Rebootsudo rebootWiringPiThese are commands for the WiringPi tool kit.# read data from all GPIO pinsgpio readall"
} ,
{
"title" : "RBenv",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/rbenv/",
"date" : "",
"content" : "Back to Cheat Sheets# list all available versionsrbenv install -lrbenv install --list-all# display ruby versions installed (* indicating in use)rbenv versions# display current ruby version in userbenv version# display version of rbenv itselfrbenv --version# install a ruby versionrbenv install 2.7.2# set the global ruby versionrbenv global 2.7.2# set a shell specific version of ruby (as opposed to application / global version)rbenv shell jruby-1.7.1# set the application specific ruby version (written to `.ruby-version`)rbenv local 2.7.2# install shims for all Ruby executables known to rbenvrbenv rehash# display full path to executable that rbenv will invoke for a commandrbenv which irb# list all Ruby versions with the given command installedrbenv whence rackupInstallation / Update with Homebrew# install with homebrewbrew install rbenv ruby-build# initialize / setup to run in your shell rbenv init# update with homebrewbrew upgrade rbenv ruby-build"
} ,
{
"title" : "How Redux Works",
"category" : "",
"tags" : "",
"url" : "/resources/notes/redux/",
"date" : "",
"content" : "Taken from Lynda - How Redux WorksWhat is ReduxThe History of ReduxDan Abramov invented the idea for Redux during a React Europe Conferencepresentation in 2015. Andrew Clark abandoned Flummox, another Fluximplementation, to work with Abramov to complete Redux.FluxA design pattern developed by Facebook. An alternative to MVC, MVP, or MVVM.Models manage the data within an application. Models are presented in Views.Models can feed data to multiple views. When a user interacts with a view,the model may become modified. This can change the data in other views. Thiscan have unexpected consequences in large complex systems.Flux was developed by Facebook, a pattern where data flows in one direction.Action -> Dispatcher -> Store -> ViewFlux is a design pattern, not a library. Libraries that apply this designpattern include Reflux, Flummox, Fluxxor, Alt, Redux, Marty.js, McFly, DeLorean,Lux, Fluxy, and Material Flux.Due to simplicity and ease of use, Redux has won out in the community.How Redux WorksRedux isn’t exactly Flux, it’s Flux-like. Data still flows in one direction,but there is only one store (not multiple). The “single source of truth”.Moularity is achieved by using functions to manage specific leafs and branchesof the state tree.Using functions for modularity comes from The Functional Programming paradigm.Functional Programming Pure functions - Do not cause side affects. Receive input, and return result.Do not modify arguments, global variables, or other state. Immutability - No variables are changed, instead new ones are created. Composition - Ability to put functions together in a way that onefunctions output becomes the next functions input.ExampleLet’s say we want to make a call to getPercent(1,4) and have it returnthe string ‘25%’. getPercent(1,4) convertToDecimal() - returns 0.25 decimalToPercent() - returns ‘25’ addPercentSign() - returns ‘25%’ import { compose } from "redux"const getPercent = compose( addPercentSign, decimalToPercent, convertToDecimal)getPercent(1, 4)In Redux composition is used in the store. The reducer functions that we createto manage parts of the state tree are composed. The action and state is pipedthrough each of these reducers until a state is eventually mutated.Plan a Redux AppActionsIn a Redux application, you want to define your actions. 
ADD_DAY REMOVE_DAY SET_GOAL ADD_ERROR CLEAR_ERROR FETCH_RESORT_NAMES CANCEL_FETCHING CHANGE_SUGGESTIONS CLEAR_SUGGESTIONSWe want to put these in a file called constants.// src/constants.jsconst constants = { ADD_DAY: "ADD_DAY", REMOVE_DAY: "REMOVE_DAY", SET_GOAL: "SET_GOAL", ADD_ERROR: "ADD_ERROR", CLEAR_ERROR: "CLEAR_ERROR", FETCH_RESORT_NAMES: "FETCH_RESORT_NAMES", CANCEL_FETCHING: "CANCEL_FETCHING", CHANGE_SUGGESTIONS: "CHANGE_SUGGESTIONS", CLEAR_SUGGESTIONS: "CLEAR_SUGGESTIONS"}export default constantsThis is done to make sure that any typos result in an error when working withthese strings that represent the different actions.State allSkiDays -> [] skiDay -> {resort, date, powder, backcountry} goal -> number errors -> [] resortNames.fetching -> boolean resortNames.suggestions -> []// initialState.json{ "allSkiDays": [ { "resort": "Kirkwood", "date": "2016-12-7", "powder": true, "backcountry": false }, { "resort": "Squaw Valley", "date": "2016-12-8", "powder": false, "backcountry": false }, { "resort": "Mt Tallac", "date": "2016-12-9", "powder": false, "backcountry": true } ], "goal": 10, "errors": [], "resortNames": { "fetching": false, "suggestions": ["SquawValley","Snowbird","Stowe","Steamboat"] }}ReducersWe will name the reducer the same thing as the key.Understanding ReducersRun Redux with babel-nodenpm initnpm install babel-cli --save-devnpm install babel-preset-latest --save-devnpm install babel-preset-stage-0 --save-devmkdir -p srcmkdir -p src/storetouch .babelrctouch src/index.jstouch src/constants.jstouch src/initialState.jsontouch src/store/reducers.js// .babelrc{ "presets": ["latest", "stage-0"]}./src/index.js will automatically get run with the ‘npm start’ command.// package.json{ "name": "ski-day-counter", "version": "1.0.0", "description": "", "main": "constants.js", "scripts": { "start": "./node_modules/.bin/babel-node ./src/" }, "author": "", "license": "ISC", "devDependencies": { "babel-cli": "^6.26.0", "babel-preset-latest": "^6.24.1", "babel-preset-stage-0": "^6.24.1" }}// src/index.jsimport C from "./constants"import { allSkiDays, goal } from "./initialState.json"console.log(` Ski Day Counter ================ The goal is ${goal} days Initially there are ${allSkiDays.length} ski days in state Constants (actions) ------------------- ${Object.keys(C).join("\n ")}`)Build Your First ReducerReducers are pure functions that are designed to manage specific part of yourstate object.// src/store/reducers.jsimport C from "../constants"export const goal = (state = 10, action) => { if (action.type === C.SET_GOAL) { return parseInt(action.payload) } else { return state }}// src/index.jsimport C from "./constants"import { goal } from "./store/reducers"const state = 10const action = { type: C.SET_GOAL, payload: 15}const nextState = goal(state, action)console.log(` initial goal: ${state} action: ${JSON.stringify(action)} new goal: ${nextState}`) initial goal: 10 action: {"type":"SET_GOAL", "payload":15} new goal: 15Create object reducers// src/index.jsimport C from "./constants"import { skiDay } from "./store/reducers"const state = nullconst action = { type: C.ADD_DAY, payload: { resort: "Heavenly", date: "2016-12-16", powder: true, backcountry: false }}const nextState = skiDay(state, action)console.log(` initial state: ${state} action: ${JSON.stringify(action)} new State: ${JSON.stringify(nextState)}`)// src/store/reducers.jsimport C from "../constants"export const goal = (state = 10, action) => { if (action.type === C.SET_GOAL) { return parseInt(action.payload) } else { return 
state }}export const skiDay = (state = null, action) => { if (action.type === C.ADD_DAY) { return action.payload } else { return state }}Console Output: initial state: null action: {"type":"ADD_DAY", "payload":{"resort":"Heavenly","date":"2016-12-16","powder":true,"backcountry":false}} new state: {"resort":"Heavenly","date":"2016-12-16","powder":true,"backcountry":false}Refactor for oneline conditionals// src/store/reducers.jsimport C from "../constants"export const goal = (state = 10, action) => action.type === C.SET_GOAL ? parseInt(action.payload) : stateexport const skiDay = (state = null, action) => action.type === C.ADD_DAY ? action.payload : stateCreate Array ReducersAdding Errors// src/index.jsimport C from "./constants"import { errors } from "./store/reducers"const state = ["user not authorized", "server feed not found"]const action = { type: C.ADD_ERROR, payload: "cannot connect to server"}const nextState = errors(state, action)console.log(` initial state: ${state} action: ${JSON.stringify(action)} new State: ${JSON.stringify(nextState)}`)// src/store/reducers.jsimport C from "../constants"export const goal = (state = 10, action) => { if (action.type === C.SET_GOAL) { return parseInt(action.payload) } else { return state }}export const skiDay = (state = null, action) => { if (action.type === C.ADD_DAY) { return action.payload } else { return state }}export const error = (state = [], action) => { switch (action.type) { case C.ADD_ERROR: // we don't want to mutate the actual state, we need to return a new object // state.push(action.payload) return [...state, action.payload] default: return state }}Console Output: initial state: user not authorized, server feed not found action: {"type":"ADD_ERROR", "payload":"cannot connect to server"} new state: ["user not authorized","server feed not found","cannot connect to server"]Clearing Errors// src/index.jsimport C from "./constants"import { errors } from "./store/reducers"const state = ["user not authorized", "server feed not found"]const action = { type: C.CLEAR_ERROR, payload: 0}const nextState = errors(state, action)console.log(` initial state: ${state} action: ${JSON.stringify(action)} new State: ${JSON.stringify(nextState)}`)// src/store/reducers.jsimport C from "../constants"// ...export const error = (state = [], action) => { switch (action.type) { case C.ADD_ERROR: // we don't want to mutate the actual state, we need to return a new object // state.push(action.payload) return [...state, action.payload] case C.CLEAR_ERROR: return state.filter((message, i) => i !== action.payload) default: return state }}Console Output: initial state: user not authorized, server feed not found action: {"type":"CLEAR_ERROR", "payload":0} new state: ["server feed not found"]Composing ReducersAdding a Day// src/index.jsimport C from "./constants"import { allSkiDays } from "./store/reducers"const state = [ { resort: "Kirkwood", date: "2016-12-15", powder: true, backcountry: false }]const action = { type: C.ADD_DAY, payload: { resort: "Boreal", date: "2016-12-16", powder: false, backcountry: false }}const nextState = allSkiDays(state, action)console.log(` initial state: ${JSON.stringify(state)} action: ${JSON.stringify(action)} new State: ${JSON.stringify(nextState)}`)// src/store/reducers.jsimport C from "../constants"export const skiDay = (state = null, action) => action.type === C.ADD_DAY ? 
action.payload : state// ...export const allSkiDays = (state = [], action) => { switch (action.type) { case C.ADD_DAY: return [...state, skiDay(null, action)] default: return state }}Console Output: initial state: [{"resort":"Kirkwood","date":"2016-12-15","powder":true,"backcountry":false}] action: {"type":"ADD_DAY","payload":{"resort":"Boreal","date":"2016-12-16","powder":false,"backcountry":false}} new State: [ {"resort":"Kirkwood","date":"2016-12-15","powder":true,"backcountry":false}, {"resort":"Boreal","date":"2016-12-16","powder":false,"backcountry":false} ]Avoiding a Duplicate Day// src/store/reducers.jsimport C from "../constants"export const skiDay = (state = null, action) => action.type === C.ADD_DAY ? action.payload : state// ...export const allSkiDays = (state = [], action) => { switch (action.type) { case C.ADD_DAY: const hasDay = state.some(skiDay => skiDay.date === action.payload.date) return hasDay ? state : [...state, skiDay(null, action)] default: return state }}Console Output: initial state: [{"resort":"Kirkwood","date":"2016-12-15","powder":true,"backcountry":false}] action: {"type":"ADD_DAY","payload":{"resort":"Boreal","date":"2016-12-16","powder":false,"backcountry":false}} new State: [{"resort":"Kirkwood","date":"2016-12-15","powder":true,"backcountry":false},{"resort":"Boreal","date":"2016-12-16","powder":false,"backcountry":false}]Removing a Day// src/index.jsimport C from "./constants"import { allSkiDays } from "./store/reducers"const state = [ { resort: "Kirkwood", date: "2016-12-15", powder: true, backcountry: false }, { resort: "Boreal", date: "2016-12-16", powder: false, backcountry: false }]const action = { type: C.REMOVE_DAY, payload: "2016-12-15"}const nextState = allSkiDays(state, action)console.log(` initial state: ${JSON.stringify(state)} action: ${JSON.stringify(action)} new State: ${JSON.stringify(nextState)}`)// src/store/reducers.jsimport C from "../constants"export const skiDay = (state = null, action) => action.type === C.ADD_DAY ? action.payload : state// ...export const allSkiDays = (state = [], action) => { switch (action.type) { case C.ADD_DAY: return [...state, skiDay(null, action)] case C.REMOVE_DAY: return state.filter(skiDay => skiDay.date !== action.payload) default: return state }}Console Output: initial state: [{"resort":"Kirkwood","date":"2016-12-15","powder":true,"backcountry":false},{"resort":"Boreal","date":"2016-12-16","powder":false,"backcountry":false}] action: {"type":"REMOVE_DAY","payload":"2016-12-15"} new State: [{"resort":"Boreal","date":"2016-12-16","powder":false,"backcountry":false}]Combine ReducersWe’re going to make use of a method called combineReducers provided by Redux.import C from "../constants"import { combineReducers } from "redux"export const goal = (state = 10, action) => action.type === C.SET_GOAL ? parseInt(action.payload) : stateexport const skiDay = (state = null, action) => action.type === C.ADD_DAY ? action.payload : stateexport const errors = (state = [], action) => { switch (action.type) { case C.ADD_ERROR: return [...state, action.payload] case C.CLEAR_ERROR: return state.filter((message, i) => i !== action.payload) default: return state }}export const allSkiDays = (state = [], action) => { switch (action.type) { case C.ADD_DAY: const hasDay = state.some(skiDay => skiDay.date === action.payload.date) return hasDay ? 
state : [...state, skiDay(null, action)].sort( (a, b) => new Date(b.date) - new Date(a.date) ) case C.REMOVE_DAY: return state.filter(skiDay => skiDay.date !== action.payload) default: return state }}export const fetching = (state = false, action) => { switch (action.type) { case C.FETCH_RESORT_NAMES: return true case C.CANCEL_FETCHING: return false case C.CHANGE_SUGGESTIONS: return false default: return state }}export const suggestions = (state = [], action) => { switch (action.type) { case C.CLEAR_SUGGESTIONS: return [] case C.CHANGE_SUGGESTIONS: return action.payload default: return state }}const resortNames = combineReducers({ fetching, suggestions})const singleReducer = combineReducers({ allSkiDays, goal, errors, resortNames})export default singleReducerWe can use less code to accomplish the same thing like so:import C from "../constants"import { combineReducers } from "redux"// ...export default combineReducers({ allSkiDays, goal, errors, resortNames: combineReducers({ fetching, suggestions })})Now let’s test this out in our index.js// index.jsimport C from "./constants"import appReducer from "./store/reducers"import initialState from "./initialState.json"let state = initialStateconsole.log(` Initial State ============== goal: ${state.goal} resorts: ${JSON.stringify(state.allSkiDays)} fetching: ${state.resortNames.fetching} suggestions: ${state.resortNames.suggestions}`)state = appReducer(state, { type: C.SET_GOAL, payload: 2})state = appReducer(state, { type: C.ADD_DAY, payload: { resort: "Mt Shasta", date: "2016-10-28", powder: false, backcountry: true }})state = appReducer(state, { type: C.CHANGE_SUGGESTIONS, payload: ["Mt Tallac", "Mt Hood", "Mt Shasta"]})console.log(` Next State ============== goal: ${state.goal} resorts: ${JSON.stringify(state.allSkiDays)} fetching: ${state.resortNames.fetching} suggestions: ${state.resortNames.suggestions}`)Console Output: Initial State ============== goal: 10 resorts: [{"resort":"Kirkwood","date":"2016-12-7","powder":true,"backcountry":false},{"resort":"Squaw Valley","date":"2016-12-8","powder":false,"backcountry":false},{"resort":"Mt Tallac","date":"2016-12-9","powder":false,"backcountry":true}] fetching: false suggestions: Squaw Valley,Snowbird,Stowe,Steamboat Next State ============== goal: 2 resorts: [{"resort":"Mt Tallac","date":"2016-12-9","powder":false,"backcountry":true},{"resort":"Squaw Valley","date":"2016-12-8","powder":false,"backcountry":false},{"resort":"Kirkwood","date":"2016-12-7","powder":true,"backcountry":false},{"resort":"Mt Shasta","date":"2016-10-28","powder":false,"backcountry":true}] fetching: false suggestions: Mt Tallac,Mt Hood,Mt ShastaThe StoreCreate a static build with webpackWe need to install webpack and the webpack dev servernpm install webpack --save-devnpm install webpack-dev-server --save-devWe need to use loaders, which are the instructions that webpack followswhen transpiling our code and creating the bundle.We need to install the Babel loader that converts our ES6 into ES5compatible JavaScript.npm install babel-loader --save-devnpm install babel-core --save-devnpm install json-loader --save-devWe need to create a webpack configuration file - webpack.config.js.// webpack.config.jsmodule.exports = { entry: "./src/index.js"}This tells Webpack which file to start with to perform thebundling on.We have an HTML file under dist/index.html. 
This is the filewhich the browser will run.<!DOCTYPE html><html> <head> <meta name="viewport" content="minimum-scale=1.0, width=device-width, maximum-scale=1.0, user-scalable=no" /> <meta charset="utf-8" /> <title>Ski Day Counter</title> </head> <body> <div id="react-container"></div> <script src="assets/bundle.js"></script> </body></html>As you can see it references assets/bundle.js, which is the filewe want Webpack to bundle our Javascript into.We can specify this in our webpack configuration.// webpack.config.jsmodule.exports = { entry: "./src/index.js", output: { path: "dist/assets", filename: "bundle.js", publicPath: "assets" }}Next we can configure how the Webpack-Dev should operate.// webpack.config.jsmodule.exports = { entry: "./src/index.js", output: { path: "dist/assets", filename: "bundle.js", publicPath: "assets" }, devServer: { inline: true, contentBase: "./dist", port: 3000 }}The inline modecauses a script to be inserted in the bundle to take care of live reloading.Build messages will appears in the browser console.There is also an iframe mode, where the page is iframed under a notification barwith messages about the build.Next we can configure Webpack to use the Babel loader.// webpack.config.jsmodule.exports = { entry: "./src/index.js", output: { path: "dist/assets", filename: "bundle.js", publicPath: "assets" }, devServer: { inline: true, contentBase: "./dist", port: 3000 }, module: { loaders: [ { test: /\.js$/, exclude: /(node_modules)/, loader: ["babel"], query: { presets: ["latest", "stage-0"] } } ] }}If we import a module that has any ES6 or other emerging JavaScript syntax,it will be included in the bundle.js as ES5 compatible JavaScript. Wewant to run the Babel loader on any file that ends in .js. This is whatthe ‘test’ regular expression does. We’re also choosing to excludeanything loaded from the ‘node_modules’ folder.We also originally setup presets for our Babel-node command.We want to make sure we include the same presets for Babelin our Webpack config.// .babelrc{ "presets": ["latest", "stage-0"]}Note: Stage presents are being deprecatedwith Babel v7.Lastly, we need to add a loader for including JSON files in our bundle.// webpack.config.jsmodule.exports = { entry: "./src/index.js", output: { path: "dist/assets", filename: "bundle.js", publicPath: "assets" }, devServer: { inline: true, contentBase: "./dist", port: 3000 }, module: { loaders: [ { test: /\.js$/, exclude: /(node_modules)/, loader: ["babel"], query: { presets: ["latest", "stage-0"] } }, { test: /\.json$/, exclude: /(node_modules)/, loader: "json-loader" } ] }}In our package.json you can see that all our dependencies have beenput under ‘devDependencies’. You’ll remember that we configuredthe default script for npm start was to use ‘babel-node’ to runour app.// package.json{ "name": "ski-day-counter", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "start": "babel-node ./src" }, "author": "", "license": "ISC", "dependencies": { "redux": "^3.6.0" }, "devDependencies": { "babel-core": "^6.18.0", "babel-loader": "^6.2.6", "babel-preset-latest": "^6.16.0", "babel-preset-stage-0": "^6.16.0", "json-loader": "^0.5.4", "webpack": "^1.13.3", "webpack-dev-server": "^1.16.2" }}Instead we’re going to change this to use the Webpack-Dev-Server instead. 
"scripts": { "start": "./node_modules/.bin/webpack-dev-server" },All executables installed by NPM are placed in ./node_modules/.bin.Webpack-Dev-Server will automatically start the Express server for us on port 3000.$ npm start> ski-day-counter@1.0.0 start /Users/jasonmiller/Projects/redux/exercises/Ch03/03_01/start> webpack-dev-server http://localhost:3000/webpack result is served from /assetscontent is served from ./distHash: 2afa1e19c1068e8225acVersion: webpack 1.15.0Time: 782ms Asset Size Chunks Chunk Namesbundle.js 286 kB 0 [emitted] mainchunk {0} bundle.js (main) 265 kB [rendered] [0] multi main 40 bytes {0} [built] [1] (webpack)-dev-server/client?http://localhost:3000 4.16 kB {0} [built]...... [96] ./~/redux/lib/compose.js 927 bytes {0} [built] [97] ./src/initialState.json 381 bytes {0} [built]webpack: Compiled successfully.Create a storeWe’ve combined all our reducers into a single appReducer. With Reduxwe don’t have to use this because the store will do this for us.The ‘createStore’ function provided by Redux is used to buildinstance of Redux stores.// src/index.jsimport C from "./constants"import appReducer from "./store/reducers"import initialState from "./initialState.json"import { createStore } from "redux"const store = createStore(appReducer)console.log("initial state", store.getState())By default, just using the appReducer, our initial state willbe created by using all of the default variables we definedin every reducer. For instance our goal value defaults to ‘10’and our allSkiDays was set to an empty array.Once every reducer is invoked once, the default value for thatreducer will be stored as the initial state.The store also provides the dispatch method used to dispatchactions that mutate the state.// src/index.jsimport C from "./constants"import appReducer from "./store/reducers"import initialState from "./initialState.json"import { createStore } from "redux"const store = createStore(appReducer)console.log("initial state", store.getState())store.dispatch({ type: C.ADD_DAY, payload: { resort: "Mt Shasta", date: "2016-10-28", powder: false, backcountry: true }})console.log("next state", store.getState())Now we run our server, access our browser via http://localhost:3000/, and we look at the console.npm startThe createStore method will also accept an object to use for initialState.const store = createStore(appReducer, initialState)After making this modification to index.js and saving the file,our Webpack-Dev-Server will reload the page and we’ll see the new outcome.Console Output:initial state {allSkiDays: Array(3), goal: 10, errors: Array(0), resortNames: {…}} allSkiDays: Array(3) 0: {resort: "Kirkwood", date: "2016-12-7", powder: true, backcountry: false} 1: {resort: "Squaw Valley", date: "2016-12-8", powder: false, backcountry: false} 2: {resort: "Mt Tallac", date: "2016-12-9", powder: false, backcountry: true} length: 3 __proto__: Array(0) errors: [] goal: 10 resortNames: {fetching: false, suggestions: Array(4)} __proto__: Objectnext state {allSkiDays: Array(4), goal: 10, errors: Array(0), resortNames: {…}} allSkiDays: Array(4) 0: {resort: "Mt Tallac", date: "2016-12-9", powder: false, backcountry: true} 1: {resort: "Squaw Valley", date: "2016-12-8", powder: false, backcountry: false} 2: {resort: "Kirkwood", date: "2016-12-7", powder: true, backcountry: false} 3: {resort: "Mt Shasta", date: "2016-10-28", powder: false, backcountry: true} length: 4 __proto__: Array(0) errors: [] goal: 10 resortNames: {fetching: false, suggestions: Array(4)} __proto__: ObjectSubscribe to 
the storeIt’s possible to subscribe to the store so that your callback methods arecalled anytime the state changes.import C from "./constants"import appReducer from "./store/reducers"import { createStore } from "redux"const store = createStore(appReducer)store.subscribe(() => console.log(store.getState()))store.dispatch({ type: C.ADD_DAY, payload: { resort: "Mt Shasta", date: "2016-10-28", powder: false, backcountry: true }})store.dispatch({ type: C.SET_GOAL, payload: 2})Console Output:{allSkiDays: Array(1), goal: 10, errors: Array(0), resortNames: {…}} allSkiDays: Array(1) 0: {resort: "Mt Shasta", date: "2016-10-28", powder: false, backcountry: true} length: 1 __proto__: Array(0) errors: [] goal: 10 resortNames: {fetching: false, suggestions: Array(0)} __proto__: Object{allSkiDays: Array(1), goal: 2, errors: Array(0), resortNames: {…}} allSkiDays: Array(1) 0: {resort: "Mt Shasta", date: "2016-10-28", powder: false, backcountry: true} length: 1 __proto__: Array(0) errors: [] goal: 2 resortNames: {fetching: false, suggestions: Array(0)} __proto__: ObjectWe can even use a subscriber to store data to local storage.store.subscribe(() => { const state = JSON.stringify(store.getState()) localStorage["redux-store"] = state})We can then load this data from local storage when our application loads.import C from "./constants"import appReducer from "./store/reducers"import { createStore } from "redux"const initialState = localStorage["redux-store"] ? JSON.parse(localStorage["redux-store"]) : {}const store = createStore(appReducer, initialState)window.store = storestore.subscribe(() => { const state = JSON.stringify(store.getState()) localStorage["redux-store"] = state})store.dispatch({ type: C.SET_GOAL, payload: 2})It’s possible to add your store to window, which might be helpful fordebugging, but you don’t want to leave that in place in production.const store = createStore(appReducer, initialState)window.store = storeConsole:> store.getState();< {allSkiDays: Array(0), goal: 10, errors: Array(0), resortNames: {…}}You can view the data in localStorage as well, as a JSON string.Console:> localStorage< Storage {redux-store: "{"allSkiDays":[],"goal":2,"errors":[],"resortNames":{"fetching":false,"suggestions":[]}}", loglevel:webpack-dev-server: "INFO", length: 2}You can clear localStorage by using localStorage.clear().> localStorage.clear()< undefined> localStorage< Storage {length: 0}Now the key is gone. 
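One small hardening step worth noting (my own addition, not from the course): since we parse whatever happens to be stored under that key, it helps to wrap the read in a try/catch so a malformed value cannot crash the app on startup. A minimal sketch, assuming the same appReducer and createStore as above and a hypothetical loadState helper:
// Hypothetical helper: fall back to an empty object if the saved JSON is missing or malformed
const loadState = () => {
  try {
    const saved = localStorage["redux-store"]
    return saved ? JSON.parse(saved) : {}
  } catch (error) {
    console.warn("Ignoring unreadable redux-store value in localStorage", error)
    return {}
  }
}
const store = createStore(appReducer, loadState())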
When we refresh the page and the first mutation to the state occurs, the current state is saved to localStorage again, and it is loaded when the page refreshes.Unsubscribe from the storeIt’s also possible to turn off store subscriptions using unsubscribe().Let’s say we have this subscription to log the state every time it’s modified, and we’re using a loop (every 250 milliseconds, 4 times a second) to change the goal to a random number.import C from "./constants"import appReducer from "./store/reducers"import { createStore } from "redux"const store = createStore(appReducer)store.subscribe(() => console.log(` Goal: ${store.getState().goal}`))setInterval(() => { store.dispatch({ type: C.SET_GOAL, payload: Math.floor(Math.random() * 100) })}, 250)When you call store.subscribe(), it returns a function that can be used to unsubscribe.import C from "./constants"import appReducer from "./store/reducers"import { createStore } from "redux"const store = createStore(appReducer)const unsubscribeGoalLogger = store.subscribe(() => console.log(` Goal: ${store.getState().goal}`))setInterval(() => { store.dispatch({ type: C.SET_GOAL, payload: Math.floor(Math.random() * 100) })}, 250)setTimeout(() => { unsubscribeGoalLogger()}, 3000)The output in the console should look like this, with logging running for only 3 seconds: Goal: 40 Goal: 45 Goal: 58 Goal: 86 Goal: 13 Goal: 91 Goal: 35 Goal: 98 Goal: 9 Goal: 48 Goal: 41 Goal: 47Create middlewareMiddleware gives you control over how actions are dispatched. You can add functionality before or after the action is dispatched. We can delay actions, or skip them altogether.Here’s a simple way of establishing our store.// store/index.jsimport C from "../constants"import appReducer from "./reducers"import { createStore } from "redux"export default (initialState = {}) => { return createStore(appReducer, initialState)}Middleware uses a Higher-Order Function, that is, a function that takes a function as an argument, or returns a function.Let’s make a method to log messages to the console. The store is going to be injected into this function.const consoleMessages = function(store) { return function(next) { return function(action) { // ... } }}We can write this more simply using ES6 syntax:const consoleMessages = store => next => action => { // ...}Because each arrow function has only one argument, the parentheses aren’t necessary. In its simplest form, the middleware just passes the action along to next, which makes sure that we are not breaking the store’s current dispatch pipeline.We can add functionality before or after the dispatching of the action as needed with this function, thus modifying the pipeline… thus middleware.const consoleMessages = store => next => action => { let result result = next(action) return result}Let’s create a console group before we dispatch the action. Console groups allow us to group all of the logs associated with this action into a collapsible group in the console.We replace the createStore method in our exported default method with a call to applyMiddleware. 
It returns a store with our middleware applied, which wewant to send the createStore function to, which we want to pass ourappReducer and initialState to.// src/store/index.jsimport appReducer from "./reducers"import { createStore, applyMiddleware } from "redux"const consoleMessages = store => next => action => { let result console.groupCollapsed(`dispatching action => ${action.type}`) console.log("ski days", store.getState().allSkiDays.length) result = next(action) let { allSkiDays, goal, errors, resortNames } = store.getState() console.log(` ski days: ${allSkiDays.length} goal: ${goal} fetching: ${resortNames.fetching} suggestions: ${resortNames.suggestions} errors: ${errors.length} `) console.groupEnd() return result}export default (initialState = {}) => { return applyMiddleware(consoleMessages)(createStore)(appReducer, initialState)}Let’s use this with our main code.// src/index.jsimport C from "./constants"import storeFactory from "./store"const initialState = localStorage["redux-store"] ? JSON.parse(localStorage["redux-store"]) : {}const saveState = () => { const state = JSON.stringify(store.getState()) localStorage["redux-store"] = state}const store = storeFactory(initialState)store.subscribe(saveState)store.dispatch({ type: C.ADD_DAY, payload: { resort: "Mt Shasta", date: "2016-10-28", powder: true, backcountry: true }})store.dispatch({ type: C.ADD_DAY, payload: { resort: "Squaw Valley", date: "2016-3-28", powder: true, backcountry: false }})store.dispatch({ type: C.ADD_DAY, payload: { resort: "The Canyons", date: "2016-1-2", powder: false, backcountry: true }})Our console output:dispatching action => ADD_DAY ski days 0 ski days: 1 goal: 2 fetching: false suggestions: errors: 0dispatching action => ADD_DAY ski days 1 ski days: 2 goal: 2 fetching: false suggestions: errors: 0dispatching action => ADD_DAY ski days 2 ski days: 3 goal: 2 fetching: false suggestions: errors: 0Action CreatorsWhat are action creatorsWith Redux the store is only intended to manage state data. 
It should notcontain application logic such as generating unique ids, reading or writingdata to a persistence layer, changing global variables, or fetching data froma REST endpoint via AJAX request.Your application should use the store, the store should not be your application.So where should our logic go?Action creators are functions that create and return actions, allowing us toencapsulate the logic of our application using functions not objects.// src/index.jsimport storeFactory from "./store"import { addDay } from "./actions"const store = storeFactory()store.dispatch(addDay("Heavenly", "2016-12-22"))If you need to add application specific logic, you could do it within theaction creator.// src/actions.jsimport C from "./constants"export function addDay(resort, date, powder = false, backcountry = false) { // Add app logic here if needed return { type: C.ADD_DAY, payload: { resort, date, powder, backcountry } }}Let’s add an action creator for removing a day.// src/actions.jsimport C from "./constants"export function addDay(resort, date, powder = false, backcountry = false) { // Add app logic here if needed return { type: C.ADD_DAY, payload: { resort, date, powder, backcountry } }}export const removeDay = function(date) { return { type: C.REMOVE_DAY, payload: date }}export const setGoal = goal => ({ type: C.SET_GOAL, payload: goal})Let’s add those to our main script.// src/index.jsimport storeFactory from "./store"import { addDay, removeDay, setGoal } from "./actions"const store = storeFactory()store.dispatch(addDay("Heavenly", "2016-12-22"))store.dispatch(removeDay("2016-12-22"))store.dispatch(setGoal(55))Async actions with redux-thunkYour logic often has to deal with asynchronicity, such as asynchronous requeststo a server. We need to be able to work with action creators that will waitfor a response before dispatching an action.Redux-Thunk is middleware that we can add to our store. Thunks are higher-orderfunctions that give you control over when and how often actions are dispatched.Redux-thunk looks at every action that is dispatched, and if it’s a function, it calls that function.npm install redux-thunk --save// src/store/index.jsimport C from "../constants"import appReducer from "./reducers"import thunk from "redux-thunk"import { createStore, applyMiddleware } from "redux"const consoleMessages = store => next => action => { let result // ... return result}export default (initialState = {}) => { return applyMiddleware(thunk, consoleMessages)(createStore)( appReducer, initialState )}Just like other action creators, Thunks are functions.We’re going to dispatch this just like any other action creator. The differenceis that Thunks don’t return the action object directly, they return anotherfunction.We can call dispatch actions as often as we like from within a Thunk, and we canalso delay the dispatch.Because Thunks get the dispatch function, we have control over when and howoften we’re going to dispatch actions. 
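Conceptually, the thunk middleware itself is tiny. A simplified sketch of what it does (not the library’s actual source):
const thunkSketch = store => next => action => {
  // If the "action" is actually a function, call it with dispatch and getState
  if (typeof action === "function") {
    return action(store.dispatch, store.getState)
  }
  // Otherwise pass the plain action object along the middleware chain
  return next(action)
}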
We can also use getState() to check the state before dispatching actions.// src/store/reducers.js// ...export const fetching = (state = false, action) => { switch (action.type) { case C.FETCH_RESORT_NAMES: return true case C.CANCEL_FETCHING: return false case C.CHANGE_SUGGESTIONS: return false default: return state }}// ...// src/actions.jsexport const randomGoals = () => (dispatch, getState) => { if (!getState().resortNames.fetching) { dispatch({ type: C.FETCH_RESORT_NAMES }) setTimeout(() => { dispatch({ type: C.CANCEL_FETCHING }) }, 1500) }}So in this case, if we’re not currently fetching resort names, then we’ll start the process of fetching them. After a second and a half, it will dispatch the action to cancel the fetching.// src/index.jsimport storeFactory from "./store"import { randomGoals } from "./actions"const store = storeFactory()store.dispatch(randomGoals())Terminal Output:dispatching action => FETCH_RESORT_NAMESdispatching action => CANCEL_FETCHINGWhat if we dispatched our randomGoals() twice? The output stays the same:Terminal Output:dispatching action => FETCH_RESORT_NAMESdispatching action => CANCEL_FETCHINGThis is because the state of ‘fetching’ became true after the first dispatch, so the second thunk does nothing.Autocomplete thunkLet’s imagine that we have an API endpoint running on an Express back-end, accessible from /resorts/{search string}. For example, a request to /resorts/hea returns ["Heavenly Ski Resort", "Heavens Sonohara"].We want to use this to provide suggestions of resorts to choose from in a search field.In order to make an AJAX request to the suggestions server, we’ll use a library called isomorphic-fetch.npm install isomorphic-fetch --saveThis library is an implementation of the WHATWG fetch specification that works in NodeJS and the browser. This is a standard for fetching resources from APIs.// src/index.jsimport storeFactory from "./store"import { suggestResortNames } from "./actions"const store = storeFactory()store.dispatch(suggestResortNames("hea"))// src/actions.jsimport C from './constants'import fetch from 'isomorphic-fetch'export function addDay(resort, date, powder=false, backcountry=false) { return { type: C.ADD_DAY, payload: { resort, date, powder, backcountry } }}// ...export const suggestResortNames = value => dispatch => { dispatch({ type: C.FETCH_RESORT_NAMES }) fetch('http://localhost:3333/resorts/' + value) .then(response => response.json()) .then(suggestions => { dispatch({ type: C.CHANGE_SUGGESTIONS, payload: suggestions }) }) .catch(error => { dispatch(addError(error.message)) dispatch({ type: C.CANCEL_FETCHING }) })}The function returned by the thunk suggestResortNames could accept both the dispatch and getState methods, but it only needs the dispatch function.Console Output:dispatching action => FETCH_RESORT_NAMESdispatching action => CHANGE_SUGGESTIONSAbout a half second after the first line, CHANGE_SUGGESTIONS shows up, once the suggestions are received from the API and added to the state.You can stop the server you’re running and refresh the page, and this will result in errors.Console Output:dispatching action => FETCH_RESORT_NAMESdispatching action => ADD_ERRORdispatching action => CANCEL_FETCHINGIncorporating ReactReact app overviewThus far we’ve used Redux to construct the client data layer for our application. It’s now time to implement the user interface layer for our new store. 
src components containers ui index.js store index.js reducers.js stylesheets index.scss Menu.scss ShowErrors.scss SkiDayList.scss actions.js constants.js index.js initialState.json routes.js React-Redux helps us integrate our store with our React components.// src/index.jsimport C from "./constants"import React from "react"import { render } from "react-dom"import routes from "./routes"import sampleData from "./initialState"const initialState = localStorage["redux-store"] ? JSON.parse(localStorage["redux-store"]) : sampleDataconst saveState = () => (localStorage["redux-store"] = JSON.stringify(store.getState()))window.React = Reactrender(routes, document.getElementById("react-container"))Let’s bring our store in.// src/index.jsimport C from "./constants"import React from "react"import { render } from "react-dom"import routes from "./routes"import sampleData from "./initialState"import storeFactory from "./store"const initialState = localStorage["redux-store"] ? JSON.parse(localStorage["redux-store"]) : sampleDataconst saveState = () => (localStorage["redux-store"] = JSON.stringify(store.getState()))const store = storeFactory(initialState)store.subscribe(saveState)// to aid with interacting from JS consolewindow.React = Reactwindow.store = storerender(routes, document.getElementById("react-container"))We need to be able to pass the store down to our component tree. React Reduxhas a compnent we can use called Provider that does this.import { Provider } from "react-redux"You can wrap the Provider component around any component tree, and it willplace the store in Context. Context is a feature that will allow any childReact component to interact with the store if needed.render( <Provider store={store}>{routes}</Provider>, document.getElementById("react-container"))This will place the store in context so that it’s accessible by any of the childcomponents listed under routes.Map props to React componentsWe’re going to wire up the ski day count.In the folder structure outlined above, the components in src/components areorganized under either the containers folder or ui folder.The ui folder contains user interface components, which are pure reactcomponents. They communicate solely through properties. They pass data backup to their parents through two-way data binding, and they receive data fromproperties as well.The container folder contains wrappers used to feed data to our components.For example, the following container component is a stateless functionalcomponent that wraps around the SkiDayCount component. Currently the variablesbeing passed are hardcoded. We want this map data from our store to theproperties of the SkiDayCount component.// src/components/containers/skiDayCount.jsimport SkiDayCount from "../ui/SkiDayCount"export default () => <SkiDayCount total={100} powder={25} backcountry={10} />To do this we’ll use connect provided by react-redux that creates acomponent that grabs the store out of state, and can map state from the storeto properties in a child component.We need to define a function that receives the state and returns an objectthat contains keys for the properties of the SkiDayCount component,and values that represent the values we want passed into the component. Thisis defined below in mapStateToProps.The connect function is a higher order function. 
It takes ourmapStateToProps function as an argument, and it returns afunction that expects the component we wish to wrap as it’s first argument(SkiDayCount).// src/components/containers/skiDayCount.jsimport SkiDayCount from "../ui/SkiDayCount"import { connect } from "react-redux"const mapStateToProps = state => { return { total: state.allSkiDays.length, powder: state.allSkiDays.filter(day => day.powder).length, backcountry: state.allSkiDays.filter(day => day.backcountry).length }}const Container = connect(mapStateToProps)(SkiDayCount)export default ContainerMap dispatch to React componentsNext we want to work with a component that displays errors.If the user chooses to close the error, a ‘CLEAR_ERROR’ action should bedispatched.Below we have our ShowErrors component. We need to replace the errors withour action errors from the data store, and also pass a function to thecomponent for the onClearError property that will dispatch the ‘CLEAR_ERROR’action.// src/components/ShowErrors.jsimport ShowErrors from "../ui/ShowErrors"export default () => ( <ShowErrors errors={["sample error"]} onClearError={index => console.log("todo: clear error at", index)} />)// src/components/ShowErrors.jsimport ShowErrors from "../ui/ShowErrors"import { clearError } from "../../actions"import { connect } from "react-redux"const mapStateToProps = state => { return { errors: state.errors }}const mapDispatchToProps = dispatch => { return { onClearError(index) { dispatch(clearError(index)) } }}export default connect( mapStateToProps, mapDispatchToProps)(ShowErrors)We want to make sure that any errors that occur get recorded in state.Anytime an error occurs, we want to add this to the state.// src/index.jsimport C from "./constants"import React from "react"import { render } from "react-dom"import routes from "./routes"import sampleData from "./initialState"import storeFactory from "./store"import { Provider } from "react-redux"import { addError } from "./actions"// ...const handleError = error => { store.dispatch(addError(error))}// ...window.addEventListener("error", handleError)If we add a call at the bottom of our file now, such as foo = bar,we get Uncaught ReferenceError: bar is not defined added to ourerrors.Console:dispatching action => ADD_ERRORUncaught ReferenceError: bar is not defined(...)Now any errors that occur with our application are displayed in the UI.Map router params to React componentsIn our routes we go to /list-days/ to view all the ski days. 
If instead we go to /list-days/backcountry we want a filter applied that only shows the backcountry days, or if we go to /list-days/powder we want to only see the powder days.So for our SkiDayList container we’re going to need to pass not only the list of days, but also the router parameter that represents the filter.Also, if the user double clicks on any of the days, we should remove that day from the list.Here is how our container is configured, with sample list data and a console log statement when an item is double clicked.// src/components/containers/SkiDayList.jsimport SkiDayList from "../ui/SkiDayList"const sample = [ { resort: "Stowe", date: "2017-1-28", powder: false, backcountry: false }, { resort: "Tuckerman's Ravine", date: "2017-1-31", powder: false, backcountry: true }, { resort: "Mad River Glen", date: "2017-2-12", powder: true, backcountry: false }]export default props => ( <SkiDayList days={sample} filter={props.params.filter} onRemoveDay={date => console.log("remove day on", date)} />)An arrow function will return whatever is on the other side of the arrow, so the mapStateToProps and mapDispatchToProps functions below return their object literals directly (wrapped in parentheses):// src/components/containers/SkiDayList.jsimport SkiDayList from "../ui/SkiDayList"import { connect } from "react-redux"import { removeDay } from "../../actions"const mapStateToProps = (state, props) => ({ days: state.allSkiDays, filter: props.params.filter})const mapDispatchToProps = dispatch => ({ onRemoveDay(date) { dispatch(removeDay(date)) }})export default connect(mapStateToProps, mapDispatchToProps)(SkiDayList)Create containers for form componentsConclusionNext steps"
} ,
{
"title" : "Javascript Reference",
"category" : "",
"tags" : "",
"url" : "/resources/notes/javascript/reference/",
"date" : "",
"content" : "AdvancedHere are some advanced Javascript subjects Hoisting Scope Strict Mode this keyword Debugging Best PracticesECMAScript 2015 (ES6)Here are new concepts introduced with ES6: Let Const Arrow function Classes"
} ,
{
"title" : "Regular Expressions",
"category" : "",
"tags" : "",
"url" : "/resources/notes/misc/regular-expressions/",
"date" : "",
"content" : "Regular ExpressionsBreaking the Ice with Regular ExpressionsSection 1 - The String StoryWhy Regular ExpressionsLet’s say you want to validate that a phone number for someone in Orlando isentered into a form. You’d want to validate that the area code is 407 or 321,and optionally ensure that dashes are placed after the area code and exchangenumber.It would require a lot of normal code to accomplish this, but only one call todo so with a regular expression.Understanding a Regular ExpressA regular expression involves a subject string (subject) where we are lookingfor a match, and a regular expression (regex) itself, which is a set ofcharacters that represents rules for searching or matching text in a string ina concise, predictable way.A regular expression engine walks through the pattern and the subject lookingfor a match.chars = '407-555-1212'if (chars.match(/407/)) # returns truechars = '321-555-1212'if (chars.match(/407/)) # returns falseIn this last case, the engine would have parsed through the entire string hopingto find a series of characters with ‘407’. We can introduce an ‘OR’ operatorinto the regular expression.if (chars.match(/407|321/)) # returns trueRegular expressions start after a forward slash, and ends before the endingforward slash. What is in-between is known as a regular expression pattern.Writing regex is really like asking the question, “Does a group of charactersmatch a specific pattern?”.Not Just NumbersRegular expressions are not just meant to match numbers. chars = ‘boat’ if (chars.match(/boat|ship/)) # returns true chars = ‘ship’ if (chars.match(/boat|ship/)) # returns trueWhat are Regular Expressions Used For?Validations on: Phone Numbers Email Password Domain NamesSearching: Words in a sentence Unwanted characters Extracting sections Replacing, cleaning, or formatting textOur First Problem: Collecting NamesWe are assembling a crew./smitty|james/ # gets a match on both names we’ve been givenSome shipmates are reporting their name as ‘ar’, ‘arr’, ‘arrr’, etc. We coulduse the OR operator to add every variant of this as we can. Instead we can usethe + character to match the previous character any number of times./smitty|james|ar+/ # will match 'arrrrrr'Let’s say we have someone who will give the name ‘james’ or ‘jameson’. Our namematcher is getting out of hand, requiring a very long regex. If we look atthings we see that names use alphabetical characters, a-z. We can specify acharacter set by placing it in square brackets. /[a-z]/ will match anyalphabetical character./\[a-z\][a-z]\[a-z\][a-z]\[a-z\][a-z]/ # will match 'smitty', but not 'james', # so this won't work./[a-z]+/ # this will match any number of characters that are alphabetical. Sothis is better, and is very short. Now we have a new name, ‘Blackbeard’. Thisincludes a capital letter, which our pattern ignores./[a-zA-Z]+/There is another way we can do this by adding a modifier after our the endingslash in our regular expression./[a-z]+/iThe ‘i’ modifier means ‘case insensitive’, which will match uppercase andlowercase characters. Keep in mind that modifiers are language specific, socheck the documentation on your regular expression methods./[a-z]+/iNow we see that ‘Captain hook’ would not be matched because it includes a space.We could use an actual space in our regex string, but it’s very literal. Insteadwe can use a ‘\s’ to match a “whitespace character” which could be a space, tab,or newline character./[a-z\s]+/iAs you can see we’ve added the \s into the character set. 
It doesn’t matterwhere we place it./[\sa-z]+/i # the same thingOur final shipmate signs up, his name is ‘Long John the 3rd’. We’re not matchingon numbers, so this won’t work. Let’s add a number range./[a-z0-9\s]+/iWe can refactor this to use the word meta character - ‘\w’. This is the sameas [a-zA-Z0-9]./[\w\s]+/Since this word meta character takes care of matching uppercase letters, we canremove our ‘i’ modifier. You can use this meta character outside of a characterset even./\w+/Section 2 - Crew EmailsVerifying email addressesSubject: sara@example.comStarts with a word, followed by an @ character, followed by another word/\w@\w/This would match the a@e in the middle ofsara@example.comWe need to add the plus operator to make sure that it continues to match characters./\w+@\w+/This would match sara@example, but not the .com, because of the period / dot//\w+@\w+.\w+/This will match 1 word character repeated 1 or more times, then the @ symbol,then 1 word character repeated 1 or more times, the dot literal, and thenanother word character 1 or more times.This happens to match sara@example!com though, because the . character in aregular expression represents any character except for a newline. It’s like awildcard./\w+@\w+\.\w+/Escaping Special Characters . Maches any character except newline . Matches literal ‘.’ character + Matches 1 or more times + Matches literal ‘+’ character ? Make proceeding pattern optional \? Matches literal ‘?’ character Being More Specific About the Top-Level Domain (TLD)We want to support only certain domains, such as COM, NET, ORG, and EDU/\w+@\w+\.com|net|org|edu/iThis will not work because it expects ‘sara@example.com’ OR ‘net’ OR ‘org’ or‘edu’ (by itself). To make this match ‘sara@example.com’ or ‘sara@example.net’or ‘sara@example.edu’, we need to use parenthesis./\w+@\w+\.(com|net|org|edu)/iParenthesis can create groups, to help with evaluation of the pattern.This is almost complete, however this pattern will still match if othercharacters are added to our string, such as ‘sara@example.comlksh’Introducing AnchorsWe can use the ‘^’ at the beginning of a regular expression to signal that thestring must begin with whatever comes after the ‘^’./^learnbydoing$/The ‘$’ means that the pattern matching must stop at the end of the patternbeing matched./^\w+@\w+\.(com|net|org|edu)$/iThese will ensure that no text comes before or after the email address. Thismeans that if we use this regular expression again and email address such as‘sara@example.comdsfjklwe’ it will not pass the check. Neither would‘@sara@example.com’./^new[\s]?\w+$/iThis can match ‘New Zealand’, ‘New Guinea’, or ‘Newfoundland’. The ‘?’ afterthe newline character signals that it can be optional./^\$\[0-9]+\.[0-9\][0-9]$/Used to pattern match a dollar amount. Begins with literal $, followed by anynumber of digits 0-9, followed by literal period, followed by two digits 0-9,with nothing after.Section 3 - ConfirmativeFind each shipmate’s answer as to whether they are willing to voyage.Subject Strings: ok, i will do it okie dokie Ahooy, Okay!!! why sure, i can go arrr, yes matey My answer me mate, is yes yAccepted Answer Keywords: ok Okay sure yes yStarting the Expression by Matching the First ConfirmationUsing ok literal to directly match “ok”/ok/This would match ‘ok, i will do it’ and ‘okie dokie’. 
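A quick JavaScript sketch of that over-matching (not from the original notes):
const answers = ["ok, i will do it", "okie dokie", "why sure, i can go"]
answers.map(answer => /ok/.test(answer)) // [true, true, false] - "okie" matches too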
We want the answer to beby itself, and not part of another word.\b - Boundary metacharacter - “whole words only”/\b\w+\b/gWe put the \b before and after the word matcher./\bok\b/Will match ‘ok’ but not ‘okie’.Inefficient Matching with Multiple OR Statements We can use the ‘ ’ (OR operator) to match other words, like ‘okay’. /\bok\b|\bokay\b/This isn’t very efficient. If we can make the ‘ay’ optional after ‘ok’./\bok(ay)?\b/iWhen grouping using parentheses, the question mark makes the entire group optional./pirate\s(ship)?/So far we match ‘ok’ or ‘Okay’, or even ‘okay’ (the i modifier at the endmatches upper or lowercase). Now lets make it so it matches for ‘sure’ as well./\bok(ay)?|sure\b/iThis will match the word ‘ensure’. We can use the parentheses to group thepattern, ensuring that a boundary is applied to all answers./\b(ok(ay)?|sure)\b/iWe can add ‘yes’ to the expression easily now./\b(ok(ay)?|sure|yes)\b/iThis still doesn’t match for ‘y’./\b(ok(ay)?|sure|y(es)?)\b/iAll sailors have entered their taglines. Work like a captain, play like a pirate. Keep calm and say Arr. Shiver me timbers matey Why are pirates pirates? cuz they arr.Taglines do not contain numbers, 40 characters or less, 20 characters or more/[a-z]+/iThis will match a single word, but not white space./[a-z\s]+/iThis will match a word followed by a space several times.Work like a captain, play like a pirate.As you can see this matches up until it hits the comma./[a-z\s,]+/iThis will match the comma, but as you can see we have many characters that wehave to include, which is inefficient.Writing a Shorter Pattern with the NOT OperatorThe ‘^’ means ‘not’ when placed within a character set./[^\d]+/i\d - Any numberSo the above pattern matches anything that is not a number./^[^\d]+$/Outside of a character set, the ‘^’ at the beginning of the expression is usedto anchor the beginning of the subject.Even Shorter with Negated Shorthand Example Negated Shorthand Note [^\d] \D match every character except numbers [^\s] \S match every character except whitespace [^\w] \W match every character except words You do not need a character set when using negated shorthand characters.Problem: We Want to Set Min and Max Characters/^\D+$/This will match any string that doesn’t contain numbers, however it stilldoesn’t meet our requirement of having 40 characters or less, or 20 charactersor more.Matching a Specific Number of Times with Interval Expressions/[a-z]{2}/This will match exactly 2 characters./[a-z]{1,3}/This will match 1 to 3 characters. At least 1, at most 3 characters./^\D{20,40}$/This will match a minimum of 20 characters, with a max of 40.\b[^aeiouy\s]+\bThis will match words that do not contain a, e, i, o, u, or y./\D+[!]{3,10}$/Match a bunch of words followed by 3 to 10 exclamation marksSection 4 - Multi-line StringsWant to look for all the birds in a block of text King penguin Emperor penguin Wandering albatross Arctic Tern Rockhopper Penguin Weddell seal NarwhalKing penguin\nEmperor penguin\nWandering albatross\nArctic Tern\nNarwhal\nRockhopper Penguin\nWeddell seal Has multiple lines separated by newline characters Each animal name is 1-2 words All animal names have mixed casing /penguin/iMatches the first occurrence of “penguin”/penguin/igThe ‘g’ at the end is a global modifier. 
This will match “penguin” as manytimes as possible with global modifier (find all matches rather than stoppingafter the first match)./\w+\spenguin/igThis will match words followed by a space and the word ‘penguin’./^\w+\spenguin$/igAdding anchors to our regex causes us to no longer match. This is because itchecks from the very beginning to the end of the subject, which is the entiremulti-line string. We need it to check on each individual line. We can add an‘m’ modifier (multi-line modifier) to match string on individual lines./^\w+\spenguin$/migThis causes the anchors to apply to the beginning or end of every single line./^\w+\s(penguin|albatross)$/migThis now causes the pattern to match the entries that end with ‘albatross’./^\w+\s(penguin|albatross|tern)$/migNow it matches strings that end with ‘tern’./^\-?\d{1,3}\.\d+$/mgThis matches an optional ‘-’ characters, 1-3 digits, followed by a literal ‘.’character, followed by one or more digits. The ‘m’ modifier makes it supportmulti-line, and the ‘g’ modifier makes it apply globally.Section 5 - Capture GroupsEach shipmate has entered their address. 1 Reindeer Lane, North Pole, AK 99705 120 East 4th Street, Juneau, AK 99705It would be great to break apart each part of the address to store separately.Number and Street Name\d+\s[\w\s]+\w{4,6},\s For street name we match any amount of numbers A space Then any word of any length including spaces (e.g. “Cherry Tree”) Then a word of 4-6 characters in length (e.g. “Lane, Ave, Road, etc) A comma And ending with a spaceCity[\w\s]+,\sAny number of characters followed by a comma and a spaceState\w{2}\s2-letter state followed by a spaceZip Code\d{5}5-digit ZIP codeFull Matching Pattern/^\d+\s[\w\s]+\w{4,6},\s[\w\s]+,\s\w{2}\s\d{5}$/igAdded anchors on each end, case insensitive (i modifier), and matching globally(more than once using ‘g’ modifier).Matching GroupsGroups are used to return values to you./(learnbydoing)/This matches the literal ‘learnbydoing’. By putting parentheses around literal,we are matching a group. We can also modify this to match ‘bydoing’ inaddition to ‘learnbydoing’./(learn(bydoing))/ learnbydoing bydoing /(learn((by)doing))/ learnbydoing bydoing byWe can match the house number and street by surrounding them with parentheses./^(\d+\s[\w\s]+\w{4,6}),\s[\w\s]+,\s\w{2}\s\d{5}$/igThis will make the first match group provide the house number and street name.We can then also match the city by adding parentheses around the city name./^(\d+\s[\w\s]+\w{4,6}),\s([\w\s]+),\s\w{2}\s\d{5}$/igThen we can also capture the state/^(\d+\s[\w\s]+\w{4,6}),\s([\w\s]+),\s(\w{2})\s\d{5}$/igAnd finally the 5 digit ZIP code/^(\d+\s[\w\s]+\w{4,6}),\s([\w\s]+),\s(\w{2})\s(\d{5})$/igWe now have all 4 groups returned to us. Now we need to restrict street namesto ‘Street’ or ‘Lane’./^(\d+\s[\w\s]+(street|lane)),\s([\w\s]+),\s(\w{2})\s(\d{5})$/igBut now we’re getting ‘street’ or ‘lane’ as a match group? How can we use thisgroup, but not have it be a capturing group?/^(\d+\s[\w\s]+(?:street|lane)),\s([\w\s]+),\s(\w{2})\s(\d{5})$/igBy placing a ‘?:’ inside of the beginning of the parentheses, it excludes thisfrom the matching groups.Soup to Bits: Breaking the Ice With Regular ExpressionsHTML5 Validation of Form Fields<form> <input type="”text”" pattern="’\(\d{3}\)\s\d{3}-\d{4}’" /> <input type="”submit”" /></form> https://regex101.com/ http://rubular.com//\(\d{3}\)\s\d{3}-\d{4}/ - Phone number patternRegex can be used with gsub method on Strings in Ruby. 
You can use #match to extract data from a string. In Atom you can use COMMAND+F to find, and enable the ‘.*’ mode to use a regex search in files."
} ,
{
"title" : "RubyGems",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/rubygems/",
"date" : "",
"content" : "Back to Cheat Sheets# List installed Gemsgem list# Display Gem Filesystem Pathgem which rake# List all Gem Commandsgem help commands# Display all gems that need updatesgem outdated# Install a gemgem install rake# Uninstall a gemgem uninstall rake# Uninstall all gems (does not remove gems in 'global' RVM gemset)gem uninstall -a -d -x -I"
} ,
{
"title" : "RVM",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/rvm/",
"date" : "",
"content" : "Back to Cheat Sheets# list all available Ruby versionsrvm list known# list installed ruby versionsrvm list# view requirements for installing certain ruby versionsrvm requirements# install ruby 1.9.2rvm install 1.9.2# use version of rubyrvm use 1.9.2# List Gem Setsrvm gemset list# Create New Gem Setrvm gemset create gemset_name# Use gemsetrvm gemset use gemset_name# Delete gemsetrvm gemset delete gemset_name# Create .rvmrc file for projectcd ~/projects/my_project/rvm --rvmrc --create 1.9.2@my_project"
} ,
{
"title" : "Tags",
"category" : "",
"tags" : "",
"url" : "/tags/",
"date" : "",
"content" : ""
} ,
{
"title" : "Vagrant",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/vagrant/",
"date" : "",
"content" : "Back to Cheat Sheets# creates virtual machine (VM) for first time, or starts the VM if it's not runningvagrant up# view the status of the vagrant VM associated with the current projectvagrant status# shuts down VMvagrant halt# hibernate the VMvagrant suspend# take VM out of hibernationvagrant resume# outputs the status of all VMs for current uservagrant global-status# output the status of the VM for the current projectvagrant status# stop and delete VMvagrant destroy# reapply configuration from Vagrantfile, restart VMvagrant reload# connect to VM via SSHvagrant ssh"
} ,
{
"title" : "Blender Viewport Navigation",
"category" : "",
"tags" : "",
"url" : "/resources/notes/blender/viewport-navigation/",
"date" : "",
"content" : "Back to Blender Notes IndexThese notes are based on the Viewport Navigation video tutorial.Selecting Objects Left-click on an object to select it Left-click empty space to de-select Shift + Left-click will allow you to select multiple objects ‘A’ will select all Alt (or Option on Mac) + ‘A’ will de-select allHow to RotateThese icons in the upper-right corner of the 3D viewport can be used to rotateyour view. You can click on ‘Z’ to switch to a top-down view. You can click anddrag any of these to rotate the view exactly as you want it.Just the same, you can click and drag the middle-mouse button for the sameeffect.How to MoveIf you want to change the center point of focus, left-click and drag on theHand icon on the left side of the viewport.You can achieve the same affect if you hold down SHIFT and then middle-mouseclick and drag.You can click on an object, and then go to the ‘View’ menu inside of the viewport and click on ‘Frame Selected’ to bring the focus onto the selected object.Frame SelectionYou can quickly get the same affect by using the ‘.’ key on the number pad areaof your keyboard (if your keyboard has a number pad).Camera ViewYou can click on the icon of the Camera to jump to the perspective of your camera.You can also use the zero key (0) on your number pad to trigger this view.Grid ViewYou can click on the icon of the Grid to switch betweenOrthographic and Perspective modes.Zooming In and OutYou can zoom in and out by clicking on the magnifying glass icon, and draggingup or down.Alternatively, you can use the plus (+) and minus (-) keys on your num pad, oruse the scroll wheel on your mouse."
} ,
{
"title" : "Vim",
"category" : "",
"tags" : "",
"url" : "/resources/cheat-sheets/vim/",
"date" : "",
"content" : "Back to Cheat SheetsBasicsVim has 3 modes (Normal Mode, Insert Mode, Line Mode). When you enter into Vim,you’re automatically placed into Normal mode, also known as Command mode.From Normal mode, press ‘i’ to go into insert mode to enter text content. PressESC to go back into Normal mode.Press ‘:’ to go into Line mode from Normal mode. Press ESC to go back intoNormal mode. Use ‘q’ from line mode to quit Vim. If you have made changes to thecurrently loaded file, it will ask you to save changes. You can use ‘q!’ to quitwithout saving the changes. You can use ‘wq’ to save the changes and quit at thesame time.Normal ModeThis mode is intended mostly for navigating and using commands to modify thefile.NavigationThese are the same as using the arrow keys. j - move down a line k - move up a line l - move to the right h - move to the leftJump between words w - move to next word W - move to next word (ignore punctuation) b - move to previous word B - move to previoius word (ignore punctuation) {num}w - Moves forward {num} words {num}j - Moves {num} lines downJumping to lines gg - Jump to beginning of file G - Jump to end of file Line number followed by ‘gg’ or ‘G’ (e.g. ‘22g’ or ‘22G’)Jumping on a line 0 - Move to beginning of a line ^ - Move to beginning of a line $ - Move to end of a line f{char} - Place cursor in first occurance of {char} to the right F{char} - Place cursor in first occurance of {char} to the left t{char} - Place cursor one character before the first occurance of {char} to the right T{char} - Place cursor one character before the first occurance of {char} to the left ; - Repeats the last motionThese are the same as using Page-Up and Page-Down keys, if present. CTRL-F - move forward / page down CTRL-B - move backwards / page upJumping Sentences / Paragraphs ( - Move to the next sentence ) - Move to the previous sentence { - Move to the next paragraph } - Move to the previous paragraphStatus Line CTRL-g - display line with current status (file name, status, cursor position,total lines) g + CTRL-g - display line with columns, lines, words, and bytesShift viewport z-ENTER - Moves text at current cursor upModification x - Delete (Cut) character under cursor X - Delete (Cut) character before cursor (to the left) {num}x - Deletes {num} characters dw - Delete (Cut) current word dd - Delete current line dl - Deletes the current character dh - Deletes character before cursor (to the left) dj - Deletes current line, and the one below it dk - Deletes current line, and the one above it d0 - Deletes from cursor to beginning of line d\$ - Deletes from cursor to end of the line dG - Deletes everything from the current line till the end of the file D - Deletes from cursor to end of the line y - Yank (copy) current character yy - Yank (copy) current line yw - Yank (copy) current word y0 - Yank (copy) from cursor to beginning of line y\$ - Yank (copy) from cursor to end of the line p - Paste last deleted/yanked content after cursor P - Paste last deleted/yanked content before cursor u - Undo CTRL-R - Redo . - Issues the previous commandLine ModeThese commands begin with ‘:’ to go into line mode, followed by ENTER. wq - write quit q - quit q! - quit without saving 0-100000000 - moves to line number specified set ruler - display rulerIf you type a partial command, then press CTRL-D, you’ll be shown a list ofsuggested commands.HelpYou can access the help by running ‘help’ or ‘h’ from the line mode.You can also try to jump to a specific topic by typing a blurb after thiscommand (e.g. 
‘:h dd’ jumps to ‘dd’ command).If you place your cursor over a help topic, you can press CTRL + ] to go intothat topic. Press CTRL + o to go back to your previous location.SearchingTaken from Finding a Word in Vi/Vim.To find a word in Vi/Vim, simply type the / or ? key, followed by the wordyou’re searching for.Once found, you can press the n key to go directly to the next occurrence of theword.Vi/Vim also allows you to launch a search on the word over which your cursor ispositioned. To do this, place the cursor over the term, and then press * or #to look it up."
} ,
{
"title" : "Build a YouTube Clone Application Using React",
"category" : "",
"tags" : "",
"url" : "/resources/notes/react/youtube-clone/",
"date" : "",
"content" : "Notes from Build a YouTube Clone Application Using ReactRepository with code available at redconfetti/react-youtube-cloneThe following notes are for Mac users. You’ll need to use some commands specificto your system as needed.SetupVSCodeThis course uses VSCode to demonstrate all examples. You can enable the terminalwithin VSCode you can use CTRL + ` (backtick), or use the menu - View > Terminal.Create-React-AppYou need to have the create-react-app tool installed to use from the commandline, which requires that you have NodeJS installed.$ npm i -g create-react-app$ which create-react-app/usr/local/bin/create-react-app$ cd Projects$ mkdir youtube-api$ cd youtube-api$ create-react-app ./$ npm installInstall DependenciesOnce the script finishes, we’ll install our dependencies.npm install --save axios @material-ui/coreMaterial UIMaterial-UI provides pre-styled React components for you to use, similar tohow Bootstrap provides UI elements out of the box for new projects. It followsthe patterns of [Material Design].It has a container system, grid system, buttons, etc.Start the Dev Servernpm startRemove Source FolderWe’re going to start over by removing the src folder and recreate it withour files from scratch.rm -rf srcmkdir srccd srctouch index.jstouch App.jsmkdir componentsmkdir apiCreate Index and App// src/index.jsimport React from "react"import ReactDOM from "react-dom"import App from "./App"ReactDOM.render(<App />, document.querySelector("#root"))// src/app.jsimport React from "react"class App extends React.Component { render() { return <h1>YouTube Clone App</h1> }}export default AppHere we create our App class as a Class based component. Another type ofcomponent you can create is a functional component.Developers use class based components usually if there is any complexity tothe component (“smart components”). Class components support lifecycle methods,and can manage the state.Functional components, also known as “Dummy components”, are basic JavaScriptfunctions that process the input and return the component to be rendered.The above class component could be rewritten using the code below, howeverthis won’t have the same level of support as a class based component.const App = () => { return <h1>YouTube Clone App</h1>}App Binding to Root ElementIn our public/index.html file you’ll see that in the body of the page thereis a DIV defined with id of “root”. All of our application will be renderedwithin this root division.<!DOCTYPE html><html lang="en"> <head> <!-- ... --> </head> <body> <noscript>You need to enable JavaScript to run this app.</noscript> <div id="root"></div> </body></html>There is no need to modify this HTML file from this point forward. Everythingwill be defined within the src folder moving forward.API AccessUnder the ‘src/api’ folder, create a file called youtube.js.This is where we’re going to define our function that gets data from theYouTube API. You will need a Google Account to access the API console,from which you will obtain an API key.You’ll have to setup a new project, then choose ‘Library’ and search for theYouTube Data API v3. Choose ‘Enable’ and proceed to setup the credential. 
Makesure to choose that you’ll be using it from a Web browser (JavaScript),and that it will be accessing ‘Public data’.We’re using Axios here to configure the API settings which include the keyprovided by Google to access the YouTube Data API v3.// src/api/youtube.jsexport default axios.create({ baseURL: "https://www.googleapis.com/youtube/v3", params: { part: "snippet", maxResults: 5, key: "abcdEFGHijklmNOPQrstuvWXYZabcdEFGHdPpLk" }})The Basics of our ApplicationLet’s import the Grid component that we’re going to use from Material-UI Core,and also our YouTube API request object.// src/App.jsimport React from "react"import { Grid } from "@material-ui/core"import youtube from "./api/youtube"class App extends React.Component { render() { return <h1>YouTube Clone App</h1> }}export default AppNext we’re going to update our App component so that it uses the Grid container. Material-UI GridHere you see we’ve created our main container using the full 16 spaces. Insideof it we’ve created an item using only 12 spaces. This establishes the mainarea where content shows with a margin of 2 spaces on the left and right side.Within this there is yet another container defined to represent our main space.It has the search bar at the top using 12 spaces, with the video details andvideo list items displayed beneath it.// src/App.jsimport React from "react"import { Grid } from "@material-ui/core"import youtube from "./api/youtube"class App extends React.Component { render() { return ( <Grid justify="center" container spacing={10}> <Grid item xs={12}> <Grid container spacing={10}> <Grid item xs={12}> {/* SEARCH BAR */} </Grid> <Grid item xs={8}> {/* VIDEO DETAILS*/} </Grid> <Grid item xs={4}> {/* VIDEO LIST */} </Grid> </Grid> </Grid> </Grid> ) }}export default AppNote: We’re using inline CSS for this tutorial. This obviously isn’t recommended forreal projects, but works for this demonstration.Our ComponentsLet’s add an import statement for the components we’re about to create.import SearchBar from "./components/SearchBar"import VideoDetail from "./components/VideoDetail"// import VideoList from "./components/VideoList"mkdir -p src/componentstouch src/components/SearchBar.jstouch src/components/VideoList.jstouch src/components/VideoDetail.jsSearch Bar ComponentWe’re using a class based component because the state will be used.// src/components/SearchBar.jsimport React from "react"class SearchBar extends React.Component { state = { searchTerm: "" } render() { return <h1>This is a search bar</h1> }}export default SearchBarVideo Detail Component// src/components/VideoDetail.jsimport React from "react"const VideoDetail = () => { return <h1>This is a Video Detail component</h1>}export default VideoDetailNow that we’ve established the basic boilerplate for these components, let’s addthem to our App.js. 
Because we’re not putting anything within these components,we add them using the self-closing XML syntax (<SearchBar />, <VideoDetail />).// src/App.jsimport React from "react"import { Grid } from "@material-ui/core"import youtube from "./api/youtube"class App extends React.Component { render() { return ( <Grid justify="center" container spacing={16}> <Grid item xs={12}> <Grid container spacing={16}> <Grid item xs={12}> <SearchBar /> </Grid> <Grid item xs={8}> <VideoDetail /> </Grid> <Grid item xs={4}> {/* VIDEO LIST */} </Grid> </Grid> </Grid> </Grid> ) }}export default AppCreating a Component IndexIf you’d like to define your components separately, but import them all at once,you can create an index.js file in src/components that exports themindividually.// src/components/index.jsexport { default as SearchBar } from "./SearchBar"export { default as VideoDetail } from "./VideoDetail"Now we can redefine our import in src/App.js like so:import { SearchBar, VideoDetail } from "./components"Building the Search BarLet’s import the components we’re going to use from the Material-UI library.// src/components/SearchBar.jsimport React from "react"import { Paper, TextField } from "@material-ui/core"class SearchBar extends React.Component { state = { searchTerm: "" } render() { return ( <Paper elevation={6} style={{ padding: "25px" }}> <form> <TextField fullWidth label="Search..."></TextField> </form> </Paper> ) }}export default SearchBarIf you check in your browser, you’ll have a nice long search bar at the top.Search Bar Event HandlersNow we want to add an event handler to the form that is executed when the searchis submitted (<form onSubmit={this.handleSubmit}>).// src/components/SearchBar.jsimport React from "react"import { Paper, TextField } from "@material-ui/core"class SearchBar extends React.Component { state = { searchTerm: "" } render() { return ( <Paper elevation={6} style={{ padding: "25px" }}> <form onSubmit={this.handleSubmit}> <TextField fullWidth label="Search..."></TextField> </form> </Paper> ) }}export default SearchBarWe can also add an ‘onChange’ method to the TextField. This will handle inputchanges to the text field.// src/components/SearchBar.jsimport React from "react"import { Paper, TextField } from "@material-ui/core"class SearchBar extends React.Component { state = { searchTerm: "" } render() { return ( <Paper elevation={6} style={{ padding: "25px" }}> <form onSubmit={this.handleSubmit}> <TextField fullWidth label="Search..." onChange={this.handleChange} ></TextField> </form> </Paper> ) }}export default SearchBarBinding our Event Handling FunctionsWithin the React Docs for Handling Events the example shows a functiondeclared within the class as per the normal method.When a normal function is declared like this, the function has it’s own scopewhere this refers to the function itself. This is why there is a call to bindthe class to this within the constructor.class Toggle extends React.Component { constructor(props) { super(props) this.state = { isToggleOn: true } // This binding is necessary to make `this` work in the callback this.handleClick = this.handleClick.bind(this) } handleClick() { this.setState(state => ({ isToggleOn: !state.isToggleOn })) } render() { return ( <button onClick={this.handleClick}> {this.state.isToggleOn ? "ON" : "OFF"} </button> ) }}There is a simple work-around to this issue. 
You can simply declare the functionusing an arrow-function, as they do not have their own this defined in theirscope.handleChange = event => { this.setState({ searchTerm: event.target.value })}We can also write this in a single line.handleChange = event => this.setState({ searchTerm: event.target.value })Now we can also add our handleSubmit function also. As you can see thismakes use of the destructuring assignment syntax added by ES6 to definea constant called searchTerm from the searchTerm property of this.state.// src/components/SearchBar.jsimport React from "react"import { Paper, TextField } from "@material-ui/core"class SearchBar extends React.Component { state = { searchTerm: "" } handleChange = event => this.setState({ searchTerm: event.target.value }) handleSubmit = () => { const { searchTerm } = this.state } render() { return ( <Paper elevation={6} style={{ padding: "25px" }}> <form onSubmit={this.handleSubmit}> <TextField fullWidth label="Search..." onChange={this.handleChange} ></TextField> </form> </Paper> ) }}export default SearchBarTo make the searchTerm string available to other components, we need to pass ina function via a prop called onFormSubmit.// src/App.js// ...<SearchBar onFormSubmit={this.handleSubmit} />// ...Within our SearchBar components we can then update our handleSubmit functionso that it is able to call this function.// src/components/SearchBar.jsimport React from "react"import { Paper, TextField } from "@material-ui/core"class SearchBar extends React.Component { state = { searchTerm: "" } handleChange = event => this.setState({ searchTerm: event.target.value }) handleSubmit = event => { const { searchTerm } = this.state const { onFormSubmit } = this.props onFormSubmit(searchTerm) event.preventDefault() } render() { return ( <Paper elevation={6} style={{ padding: "25px" }}> <form onSubmit={this.handleSubmit}> <TextField fullWidth label="Search..." onChange={this.handleChange} ></TextField> </form> </Paper> ) }}export default SearchBarSo now the SearchBar component will run the method we inject into it and passit the searchTerm.We’ve also updated the handleSubmit function so that it receives the eventand makes a call to event.preventDefault() to stop the form submit fromrefreshing the page.Next we’re going to define our handleSubmit method that we’re passing intothe SearchBar component within our App.js.As you can see we’re using the async keyword before our function, and theawait keyword before the call to the YouTube API call.This is a new feature of ES2017 that makes it possible to make asynchronouscalls using a standard synchronous functional definition instead of having torely on promise chain. 
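For comparison, here is roughly what the same request looks like with a promise chain versus async/await (a sketch using the youtube axios instance defined earlier; the function names are just for illustration):
// Promise chain
const handleSubmitWithThen = searchTerm => {
  youtube
    .get("search", { params: { q: searchTerm } })
    .then(response => console.log(response.data.items))
}

// async/await - reads top to bottom like synchronous code
const handleSubmitWithAwait = async searchTerm => {
  const response = await youtube.get("search", { params: { q: searchTerm } })
  console.log(response.data.items)
}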
MDN web docs - async function How to use Async Await in JavaScript// src/App.jsimport React from "react"import { Grid } from "@material-ui/core"import { SearchBar, VideoDetail } from "./components"import youtube from "./api/youtube"class App extends React.Component { handleSubmit = async searchTerm => { const response = await youtube.get("search", { params: { q: searchTerm } }) console.log(response) } render() { return ( <Grid justify="center" container spacing={10}> <Grid item xs={12}> <Grid container spacing={10}> <Grid item xs={12}> <SearchBar onFormSubmit={this.handleSubmit} /> </Grid> <Grid item xs={8}> <VideoDetail /> </Grid> <Grid item xs={4}> {/* VIDEO LIST */} </Grid> </Grid> </Grid> </Grid> ) }}export default AppFixing Axios CallIt turns out that this doesn’t work, we end up getting an HTTP 400 error fromthe YouTube API because Axios is not passing our default parameters.First let’s cut those params from our youtube.js file.// src/api/youtube.jsimport axios from "axios"export default axios.create({ baseURL: "https://www.googleapis.com/youtube/v3"})And then paste them into the call from src/App.js.handleSubmit = async searchTerm => { const response = await youtube.get("search", { params: { part: "snippet", maxResults: 5, key: "[Api Key]", q: searchTerm } }) console.log(response)}If you look at the console you’ll see that the ‘data’ object in the API responsecontains the ‘items’ returned by the search. We can narrow down our consolelog statement to this.handleSubmit = async searchTerm => { const response = await youtube.get("search", { params: { part: "snippet", maxResults: 5, key: "[Api Key]", q: searchTerm } }) console.log(response.data.items)}Now we have all the data we need to create our YouTube video list.Displaying Search ResultsAdding results to stateWithin our App.js, we can now establish the definition of the default statewithin our App class, and we can also use this.setState() within ourhandleSubmit function so that it sets the ‘videos’ to equal the YouTubesearch results we obtained, and it sets the ‘selectedVideo’ to the first videoin the search results collection.// src/App.js// ...class App extends React.Component { state = { videos: [], selectedVideo: null } handleSubmit = async searchTerm => { const response = await youtube.get("search", { params: { part: "snippet", maxResults: 5, key: "[App Key]", q: searchTerm } }) this.setState({ videos: response.data.items, selectedVideo: response.data.items[0] }) } // render (){ ... }}// ...Populating Video DetailNow we can pass the selectedVideo information to the VideoDetail component.// src/App.js// ...class App extends React.Component { // state = ... // handleSubmit = ... render() { const { selectedVideo } = this.state return ( <Grid justify="center" container spacing={10}> <Grid item xs={12}> <Grid container spacing={10}> <Grid item xs={12}> <SearchBar onFormSubmit={this.handleSubmit} /> </Grid> <Grid item xs={8}> <VideoDetail video={selectedVideo} /> </Grid> <Grid item xs={4}> {/* VIDEO LIST */} </Grid> </Grid> </Grid> </Grid> ) }}// ...Now we can open the VideoDetail component and build it out to use the objectwe’re passing via the props.First off we added an import of Paper and Typography from the Material-UIlibrary. 
With a function based components the props are passed as the firstargument to the function, so we’re using variable destructuring to bring inonly the ‘video’ property from the props passed in.We’re using a React.Fragment wrapper to wrap the two Paper component.// src/components/VideoDetail.jsimport React from "react"import { Paper, Typography } from "@material-ui/core"const VideoDetail = ({ video }) => { return ( <React.Fragment> <Paper elevation={6} style={{ height: "70%" }}></Paper> <Paper elevation={6} style={{ padding: "15px" }}></Paper> </React.Fragment> )}export default VideoDetailWithin the first Paper element, we’re going to place an iframe that will displaythe video.Note that the videoSrc constant we define uses backticks around the stringso that the interpolation of the videoId is supported.// src/components/VideoDetail.jsimport React from "react"import { Paper, Typography } from "@material-ui/core"const VideoDetail = ({ video }) => { if (!video) return <div>Loading...</div> const videoSrc = `https://www.youtube.com/embed/${video.id.videoId}` return ( <React.Fragment> <Paper elevation={6} style={{ height: "70%" }}> <iframe frameBorder="0" height="100%" width="100%" title="Video Player" src={videoSrc} /> </Paper> <Paper elevation={6} style={{ padding: "15px" }}></Paper> </React.Fragment> )}export default VideoDetailNow if you go check the app by doing a search, the video will load.Video TextBelow the video we want to display the information about the video. TheTypography component can support paragraphs, headers, etc.// src/components/VideoDetail.jsimport React from "react"import { Paper, Typography } from "@material-ui/core"const VideoDetail = ({ video }) => { if (!video) return <div>Loading...</div> const videoSrc = `https://www.youtube.com/embed/${video.id.videoId}` return ( <React.Fragment> <Paper elevation={6} style={{ height: "70%" }}> <iframe frameBorder="0" height="100%" width="100%" title="Video Player" src={videoSrc} /> </Paper> <Paper elevation={6} style={{ padding: "15px" }}> <Typography variant="h4"> {video.snippet.title} - {video.snippet.channelTitle} </Typography> <Typography variant="subtitle1"> {video.snippet.channelTitle} </Typography> <Typography variant="subtitle2">{video.snippet.description}</Typography> </Paper> </React.Fragment> )}export default VideoDetailThere we have our video detail component, let’s focus on the video list.Video ListLet’s start by creating a new component to render the video items in the list,which we will call VideoItem.// src/components/VideoItem.jsimport React from "react"import { Grid, Paper, Typography } from "@material-ui/core"const VideoItem = () => { return <h1>Video Item</h1>}export default VideoItem// src/components/VideoList.jsimport React from "react"import { Grid } from "@material-ui/core"const VideoList = () => { return <h1>VideoList</h1>}export default VideoList// src/components/index.jsexport { default as SearchBar } from "./SearchBar"export { default as VideoDetail } from "./VideoDetail"export { default as VideoList } from "./VideoList"After establishing these, we’ll a the VideoList component into the importstatement within App.js, and update the remaining empty Grid item so that itcontains <VideoList />>.// src/App.jsimport React from "react"import { Grid } from "@material-ui/core"import { SearchBar, VideoDetail, VideoList } from "./components"import youtube from "./api/youtube"class App extends React.Component { // ... 
render() { const { selectedVideo } = this.state return ( <Grid justify="center" container spacing={10}> <Grid item xs={12}> <Grid container spacing={10}> <Grid item xs={12}> <SearchBar onFormSubmit={this.handleSubmit} /> </Grid> <Grid item xs={8}> <VideoDetail video={selectedVideo} /> </Grid> <Grid item xs={4}> <VideoList /> </Grid> </Grid> </Grid> </Grid> ) }}Now our VideoList component is rendering.Iterating Over ItemsWithin our VideoList we need to iterate over the list of items obtained from the YouTube API and render a separate VideoItem for each one.First let’s pass our list of videos to the VideoList component via the props.We’ll do this by destructuring this.state within the render() function so that both selectedVideo and videos are present. We’ll then pass videos to the VideoList component.// src/App.js// ...class App extends React.Component { // ... render() { const { selectedVideo, videos } = this.state return ( <Grid justify="center" container spacing={10}> <Grid item xs={12}> <Grid container spacing={10}> <Grid item xs={12}> <SearchBar onFormSubmit={this.handleSubmit} /> </Grid> <Grid item xs={8}> <VideoDetail video={selectedVideo} /> </Grid> <Grid item xs={4}> <VideoList videos={videos} /> </Grid> </Grid> </Grid> </Grid> ) }}Now within the VideoList, we’ll destructure the videos property from props, and then use the Array.map() method to generate an array of VideoItem components that represent each item. This will require that we also import our VideoItem component within VideoList.js.// src/components/VideoList.jsimport React from "react"import { Grid } from "@material-ui/core"import VideoItem from "./VideoItem"const VideoList = ({ videos }) => { const listOfVideos = videos.map(video => <VideoItem />) return listOfVideos}export default VideoListIf you go to the browser and test your app now, a search should result in ‘Video Item’ shown 5 times on the right side of the screen.Do remember though that when you’re mapping over a collection you need to provide a unique key for each item. 
The Array.map function will provide an index integer as the second argument, so we can simply use that.We've also added the video itself as another prop.const listOfVideos = videos.map((video, id) => ( <VideoItem key={id} video={video} />))Expanding the Video ItemsWithin the VideoItem component, we'll render the thumbnail and title for each video.// src/components/VideoItem.jsimport React from "react"import { Grid, Paper, Typography } from "@material-ui/core"const VideoItem = ({ video }) => { return ( <Grid item xs={12}> <Paper style={{ display: "flex", alignItems: "center" }}> <img style={{ marginRight: "20px" }} alt="thumbnail" src={video.snippet.thumbnails.medium.url} /> <Typography variant="subtitle1"> <b>{video.snippet.title}</b> </Typography> </Paper> </Grid> )}export default VideoItemThis displays the medium thumbnail images for each item, with the title shown to the right of each thumbnail.Further StylingLet's return the VideoList within a Grid container.// src/components/VideoList.jsimport React from "react"import { Grid } from "@material-ui/core"import VideoItem from "./VideoItem"const VideoList = ({ videos }) => { const listOfVideos = videos.map((video, id) => ( <VideoItem key={id} video={video} /> )) return ( <Grid container spacing={10}> {listOfVideos} </Grid> )}export default VideoListThis adds a bit of spacing to our items displayed in the list.Linking Items to the Selected VideoNow we're going to make it so that when someone selects a video from the list, it is loaded into the selectedVideo state in our App component.This requires that we pass a function to the VideoItem component as the onVideoSelect prop, which VideoList in turn receives as a prop from App.js.// src/components/VideoList.jsimport React from "react"import { Grid } from "@material-ui/core"import VideoItem from "./VideoItem"const VideoList = ({ videos, onVideoSelect }) => { const listOfVideos = videos.map((video, id) => ( <VideoItem onVideoSelect={onVideoSelect} key={id} video={video} /> )) return ( <Grid container spacing={10}> {listOfVideos} </Grid> )}export default VideoListNow let's define this function in App.js.// src/App.js// ...class App extends React.Component { // ... onVideoSelect = video => { this.setState({ selectedVideo: video }) } render() { const { selectedVideo, videos } = this.state return ( <Grid justify="center" container spacing={10}> <Grid item xs={12}> <Grid container spacing={10}> <Grid item xs={12}> <SearchBar onFormSubmit={this.handleSubmit} /> </Grid> <Grid item xs={8}> <VideoDetail video={selectedVideo} /> </Grid> <Grid item xs={4}> <VideoList videos={videos} onVideoSelect={this.onVideoSelect} /> </Grid> </Grid> </Grid> </Grid> ) }}Lastly, we need to bind this prop to the onClick event. You'll see that we've added onVideoSelect as a property being destructured from the props argument passed to our component. We've also bound our click event to the Paper element that displays the thumbnail.// src/components/VideoItem.jsimport React from "react"import { Grid, Paper, Typography } from "@material-ui/core"const VideoItem = ({ video, onVideoSelect }) => { return ( <Grid item xs={12}> <Paper style={{ display: "flex", alignItems: "center" }} onClick={() => onVideoSelect(video)} > <img style={{ marginRight: "20px" }} alt="thumbnail" src={video.snippet.thumbnails.medium.url} /> <Typography variant="subtitle1"> <b>{video.snippet.title}</b> </Typography> </Paper> </Grid> )}export default VideoItemAnd now we're done."
} ,
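The React excerpts in the entry above reference a youtube API module and a handleSubmit handler that stay hidden behind "// ..." markers. As a hedged sketch only — the endpoint parameters, file layout, and key placeholder below are assumptions, not the author's code — such a module might look like this:

```js
// Hypothetical sketch -- not the tutorial's exact code.
// src/api/youtube.js: an axios instance preconfigured for the YouTube Data API v3,
// matching the `import youtube from "./api/youtube"` line in App.js above.
import axios from "axios"

export default axios.create({
  baseURL: "https://www.googleapis.com/youtube/v3",
  params: {
    part: "snippet",
    maxResults: 5,       // matches the five 'Video Item' results mentioned earlier
    key: "YOUR_API_KEY", // placeholder -- supply your own key
  },
})
```

With that in place, App's handleSubmit (elided behind "// ..." above) would presumably do something like `const { data } = await youtube.get("/search", { params: { q: searchTerm } })` followed by `this.setState({ videos: data.items, selectedVideo: data.items[0] })` — again an assumption, since that code is not shown in this entry.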
{
} ,
{
"title" : "NodeJS",
"category" : "",
"tags" : "",
"url" : "/resources/notes/node/nodejs/",
"date" : "",
"content" : "NodeJShttps://www.codeschool.com/courses/real-time-web-with-node-jsIntro to NodeJSAllows you to build scalable network applications using JavaScript on the server-side. Uses V8 JavaScript Runtime, which powers Chrome / Chromium. NodeJS is a wrapper for this engine.What can you build with NodeJS? Websocket Server (e.g. Chat Room) Fast File Upload client Ad Server Any Real-Time Data AppsNodeJS is not a framework, it is very low level, and it is not multi-threaded. It is a single threaded server.Blocking vs Non-Blocking CodeWith blocking code the previous command must be completed before the next command can run. Non-blocking code is structured so that it uses callbacks to determine the next step after something is completed. This allows the code to continue running if one item is not yet complete.NodeJS is single threaded, however it’s able to run JavaScript Asynchronously. It is build upon libuv, a cross-platform library that abstracts apis/syscalls for asynchronous (non-blocking) input/output provided by the supported OSes (Unix, OS X and Windows).In this model, open/read/write operation on devices and resources (sockets, file system, etc) managed by the file-system do not block the calling thread like they do with C programs. They mark the process to be notified when new data or events are available.Node uses an event loop to allow this, which invokes the next callback/function that was scheduled for execution. “Everything runs in parallel except your code”, meaning that node allows your code to handle requests from hundreds and thousands of open sockets with a single thread concurrently by multiplexing and sequencing all your js logic in a single stream of execution.var http = require('http');http.createServer(function(request, response) { response.writeHead(200); // status code in header response.write("Hello, this is dog."); // response body response.end(); // close the connection}).listen(8080); // listen for connections on this portThis code uses an event loop that checks for an HTTP request until one is encountered.Why JavaScript?“Javascript has certain characteristics that make it very different than other dynamic languages, namely that it has no concept of threads. Its model of concurrency is completely based around events.” - Ryan DahlKnown Events: request connection closeWhen these events are encountered, the associated callback functions are called. You could consider these callback functions our Event Queue.var http = require('http');http.createServer(function(request, response) { response.writeHead(200); // status code in header response.write("Hello, this is dog."); // response body setTimeout(function() { // represents long running process response.write("Dog is done."); response.end(); }, 5000); // 5000ms = 5 seconds}).listen(8080); // listen for connections on this port ## Blocking Call to the File Systemvar fs = require('fs');var contents = fs.readFileSync('index.html');console.log(contents); ## Non-Blocking Call to the File Systemvar fs = require('fs');fs.readFile('index.html', function(error, contents){ console.log(contents); }); ## Read File and Serve as HTMLvar http = require('http');var fs = require('fs');http.createServer(function(request, response) { response.writeHead(200, {'Content-Type': 'text/html'}); fs.readFile('index.html', function(error, contents) { response.write(contents); response.end(); });}).listen(8080); # Events - Level TwoThe DOM triggers events such as click, hover, or submit. 
You can register callbacks via jQuery when these events occur.Many objects in NodeJS emit events. The net.Server object inherits from the EventEmitter class, and emits the ‘request’ event. fs.readStream also inherits from EventEmitter, and emits the ‘data’ event as data is read from the file.Custom Event EmittersWe can register our own emitter to do something like log errors, warnings, or info events.var EventEmitter = require('events').EventEmitter;var logger = new EventEmitter(); logger.on('error', function(message) { console.log('ERR: ' + message);});logger.emit('error', 'Spilled Milk');logger.emit('error', 'Eggs Cracked');// Chat Emittervar events = require('events');var EventEmitter = events.EventEmitter;var chat = new EventEmitter();chat.on('message', function(message){ console.log(message);});// emit call to 'message' callbackchat.emit('message', 'hello, how are you doing'); ## Multiple EventsIt is possible to register multiple callbacks for a single request emitter. When the request is received, the response will be sent to the requestor, and the console log will occur.var http = require('http');var server = http.createServer();server.on('request', function(request, response) { response.writeHead(200); response.write("Hello, this is dog"); response.end();});server.on('request', function(request, response) { console.log('New request coming in...');});server.listen(8080);# terminal 1$ node server.jsNew request coming in...# terminal 2$ curl http://localhost:8080/Hello, this is dog ## HTTP Echo ServerIn the last lesson, we registered an event for http server to respond to requests, using the request event callback.http.createServer returns a new web server object. It allows you to pass a requestListener function, which it uses to respond to requests.http.createServer(function(request, response) { … });The ‘request’ event makes a call to this function, passing the two parameters into the function.Alternative SyntaxThis alternative syntax is typically how you add event listeners in Node.// create server with no parametersvar server = http.createServer();// register request event callbackserver.on('request', function(request, response) { … });// register event callback when server is closedserver.on('close', function() { … }); # Streams - Level 3For efficiency, we need to be able to access data chunk-by-chunk, piece-by-piece, and sending the data as it receives each chunk. By processing and sending each chunk, less memory is used.Streams can be readable, writeable, or both. The API described here is for streams in Node version 0.10.x (the streams2 API).With our server example, the request is a readable stream, and the response is a writable stream.http.createServer(function(request, response) { response.writeHead(200); response.write("<p>Dog is running.</p>"); setTimeout(function(){ response.write("<p>Dog is done.</p>"); response.end(); }, 5000);}).listen(8080);The browser immediately sends the response with the first write to the response stream. 5 seconds later we write the “Dog is done” string to the response, then close the response object, ending the stream.How might we read from the request? The request object is a readable stream, which also inherits from EventEmitter. 
It can communicate with other objects through events, such as ‘readable’ which is fired when the object is ready to consume data, and the ‘end’ event which is fired when it is done.http.createServer(function(request, response) { response.writeHead(200); request.on('readable', function() { var chunk = null; while ( null !== (chunk = request.read()) ) { console.log(chunk.toString()); } }); request.on('end', function() { response.end(); });}).listen(8080);We have to use toString() to convert the chunk, because it provides a buffer where binary data might be present.In this case we’re logging the data we receive from the client to the console. Instead of doing this we can provide the same data back to the client that we’ve received.http.createServer(function(request, response) { response.writeHead(200); request.on('readable', function() { var chunk = null; while ( null !== (chunk = request.read()) ) { response.write(chunk); } }); request.on('end', function() { response.end(); });}).listen(8080);In this scenario, response.write converts the chunk to a string for us. When we want to redirect the request stream back to the response stream we can instead use request.pipe(response). This same code above can be refactored with the following:http.createServer(function(request, response) { response.writeHead(200); request.pipe(response);}).listen(8080);$ curl -d 'hello' http://localhost:8080HelloThis is similar to the pipe operator used on the Bash command line, used to stream the output from one operation into the next one.When you can, use pipe instead of listening to the readable event and manually reading the chunks. This helps protect your application from future breaking changes to the streams API provided by Node, which may change in the future given that NodeJS is still very young (v0.10.0).In the NodeJS documentation, it is reported how stable the API is, meaning how likely it is to change and thus break your existing functionality that depends on the API.For instance, the File System has a Stability rating of 3 - Stable, with the Stream API rated as 2 - Unstable.Reading and Writing a Filevar fs = require('fs');var file = fs.createReadStream("readme.md");var newFile = fs.createWriteStream("readme_copy.md");file.pipe(newFile);Streaming is so powerful, so simple to use with the pipe function, that there are 3rd party libraries that depend on it. For instance the Gulp.js build system, which exposes the pipe function as its public API so you can do many sorts of manipulations on its assets with very few lines of code.var fs = require('fs');var http = require('http');http.createServer(function(request, response) { var newFile = fs.createWriteStream("readme_copy.md"); request.pipe(newFile); request.on('end', function() { response.end('uploaded!'); });}).listen(8080);$ curl --upload-file readme.md http://localhost:8080We can pipe any read stream into any write stream. In this example we can read from a request instead of from a file. We listen to the ‘end’ event for the request so that we can close the response stream once completed.We are streaming pieces of the file from the client to the server, then the server is streaming those pieces to the disk as they are being read from the request. Because Node is non-blocking, if we try to upload two files to the same server, Node can handle them simultaneously.Ryan Dahl created NodeJS was to deal with the issue of file uploads. 
Many web applications try to receive the entire file into memory before writing it to the disk, which can cause issues on the server side, which can also block other users of the same web application.It’s also not possible to provide file upload progress to the user as it’s being uploaded.File Uploading Progress$ curl --upload-file file.jpg http://localhost:8080progress: 3%progress: 6%…progress: 99%progress: 100%var fs = require('fs');var http = require('http');http.createServer(function(request, response) { var newFile = fs.createWriteStream("readme_copy.md"); var fileBytes = request.headers['content-length']; var uploadedBytes = 0; request.on('readable', function() { var chunk = null; while(null !== (chunk = request.read())) { uploadedBytes += chunk.length; var progress = (uploadedBytes / fileBytes) * 100; response.write("progress: " + parseInt(progress, 10) + "%\n"); } }); request.pipe(newFile); request.on('end', function() { response.end('uploaded!'); });}).listen(8080);Because the stream of request contents to the newFile object (writeable file stream) is being established immediately after the request object has registered the ‘readable’ function callback, the pipe is set up immediately to feed into the file. As soon as the request object is ready to read chunks of data from, it starts to process each chunk and provide the progress back to the client making the request.Output to Standard Outputvar fs = require('fs');var file = fs.createReadStream('fruits.txt');file.pipe(process.stdout); # Modules - Level 4We’ve loaded modules using ‘require’ in past lessons.var http = require('http');var fs = require('fs');- How does ‘require’ return the libraries?- How does it find these files?// custom_hello.jsvar hello = function() { console.log("hello!");}module.exports = hello;// custom_goodbye.jsexports.goodbye = function() { console.log("bye!");}// app.jsvar hello = require('./custom_hello');Var gb = require('./custom_goodbye');hello();gb.goodbye();With our hello module, we’re only making a single method public by assigning it to module.exports. 
With the goodbye module we could assign more than just this single function to the module.We can optionally require the module and then call the method directly.require('./custom_goodbye').goodbye(); ## Export Multiple Functions// my_module.jsvar foo = function() { … }var bar = function() { … }var baz = function() { … }exports.foo = fooexports.bar = bar// app.jsvar myMod = require('./my_module');myMod.foo();myMod.bar();Because we did not export the ‘baz’ function, it is private, and only accessible from within the module.Making HTTP Requests// app.jsvar http = require('http');var message = "Here's looking at you, kid.";var options = { host: 'localhost', port: 8080, path: '/', method: 'POST'}var request = http.request(options, function(response) { response.on('data', function(data) { console.log(data); // logs response body });});request.write(message); // begins requestrequest.end(); // finishes request ## Encapsulating the FunctionWe can make this simpler by wrapping it in a function call.// make_request.jsvar http = require('http');var makeRequest = function(message) { var options = { host: 'localhost', port: 8080, path: '/', method: 'POST' } var request = http.request(options, function(response) { response.on('data', function(data) { console.log(data); // logs response body }); }); request.write(message); // begins request request.end(); // finishes request}module.exports = makeRequest;// app.jsvar makeRequest = require('./make_request');makeRequest("Here's looking at you, kid");makeRequest("Hello, this is dog"); ## Node Package Manager (NPM)Where does require look for modules?var make_request = require('./make_request'); // looks in same directoryvar make_request = require('../make_request'); // looks in parent directory// looks at absolute pathvar make_request = require('/Users/eric/nodes/make_request');// looks for it inside 'node_modules' directoryvar make_request = require('make_request');Looks for node_modules directory: In the current directory In the parent directory In the parent’s parent directory Etc.Each directory inside of ‘node_modules’ is a package that represents a module.Packages come from NPM (Node Package Manager).NPM comes with node, there is a module repository, and it handles dependencies automatically, and makes it easy to public modules.http://www.npmjs.org/Installing a NPM ModuleInside of /home/my_app:$ npm install requestThis will install the ‘request’ package inside of /home/my_app/node_modules/request// - /home/my_app/app.js// loads from local node_modules directoryvar request = require('request') ## Local vs GlobalSometimes you may want to install packages globally, instead of only within a specific application.$ npm install coffee-script -gThis package comes with an executable that we can use from the command line:$ coffee app.coffeeA globally installed NPM module cannot be required.// will not workvar coffee = require('coffee-script');We still have to install the coffee-script module locally for the application to require it into our program.Finding ModulesYou can find libraries that are useful in the NPM registry website, in Github, or you can search from the command line:$ npm search request ## Defining Your Dependencies# my_app/package.json{ "name": "My App", "version": "1", "dependencies": { "connect": "1.8.7" }}$ npm installWhen you get a node project, the node_modules folder won’t be present. 
You’ll have to run ‘npm install’ to install the needed packages.Inside of my_app/node_modules/connect, you’ll notice that each package has it’s own set of dependencies as well. NPM installs those dependencies as well.Semantic Versioning"``connect``"``: "``1.8.7``"Major version is 1, Minor is 8, Patch is 7.A major version change may completely change the API. A minor version is less likely, and a patch shouldn’t."connect": "~1" - Will fetch any version greater to or equal to 1.0.0, yet less than 2.0.0"connect": "~1.8" - Will fetch versions greater than 1.8.0, less than 1.9.0"connect": "~1.8.7" - Will fetch versions greater than 1.8.7, less than 1.9.0See http://semver.org/ for more informationExpress - Level 5Node is very low level. You’ll want to build a web framework if you’re working on a very large web application. Or you can use an existing framework such as Express.“Sinatra inspired web development framework for Node.js – insanely fast, flexible, and simple” Easy route URLs to callbacks Middleware (from Connect) Environment based configuration Redirection helpers File Uploads install module and add to package.json $ npm install –save express var express = require(‘express’); var app = express(); // configure root route app.get(‘/’, function(request, response) { // serve file from current directory response.sendFile(__dirname + “/index.html”); }); app.listen(8080); Express Routes We want to create an endpoint that receives a certain twitter users name, obtains that users tweets from Twitter, and returns them.// app.jsvar request = require('request');var url = require('url');app.get('/tweets/:username', function(req, response) { var username = req.params.username; options = { protocol: "http:", host: 'api.twitter.com', pathname: '/1/statuses/user_timeline.json', query: { screen_name: username, count: 10 } } var twitterUrl = url.format(options); request(twitterUrl).pipe(response);});The Twitter API requires users to authenticate, so there will be more code required to do this.$ curl -s http://localhost:8080/tweets/eallam/$ npm install prettyjson -g$ curl -s http://localhost:8080/tweets/eallam/ | prettyjson ## Express Templates$ npm install --save ejs// my_app/package.json"dependencies": { "express": "4.9.6", "ejs": "1.0.0"}// my_app/app.jsapp.get('/tweets/:username', function(req, response) { … request(url, function(err, res, body) { var tweets = JSON.parse(body); response.locals = {tweets: tweets, name: username}; response.render('tweets.ejs'); }}// my_app/views/tweets.ejs<h1>Tweets for @<%= name %></h1><ul> <% tweets.forEach(function(tweet) { %> <li><%= tweet.text %></li> <% }); %></ul> # Socket IO - Level 6Node works very well for providing real time communication, which is perfect for running a chat server.Typically the HTTP request/response cycle involves a request, the browser waits for a response, and then the server responds. That is the end of the connection.With Websockets, we can transmit information back and forth at the same time. This is known as a full duplex connection. 
We cannot rely on every web browser to support WebSockets, so we have to use a library as a fallback for when the browser does not support socket connections.$ npm install --save socket.io// app.jsvar express = require('express');var app = express();var server = require('http').createServer(app);var io = require('socket.io')(server);io.on('connection', function(client) { console.log('Client connected…');});app.get('/', function(req, res) { res.sendFile(__dirname + '/index.html');});server.listen(8080);Here we are passing the server object to the socket.io library so that it can use the server to listen for requests. We are registering a connection event with logger, and then also configuring a path for the request to be received by.Socket.io for Websockets<script src="/socket.io/socket.io.js"></script><script> var socket = io.connect('http://localhost:8080');</script> ## Sending Message to Client// app.jsio.on('connection', function(client) { console.log('Client connected…'); // emit the message event on the client client.emit('messages', { hello: 'world' });});<!-- index.html --><script src="/socket.io/socket.io.js"></script><script> var socket = io.connect('http://localhost:8080'); socket.on('messages', function(data) { alert(data.hello); });</script>$ node app.js Info - socket.io startedhttp://localhost:8080/Alert pops up with the hello in the browser, and the console logs that the client connected.Sending Messages to Server// app.jsio.on('connection', function(client) { client.on('messages', function(data) { console.log(data); });});<!-- index.html --><script> var socket = io.connect('http://localhost:8080'); $('#chat_form').submit(function(e) { var message = $('#chat_input').val(); socket.emit('messages', message); });</script>$ node app.js Info - socket.io started ## Broadcasting MessagesWe’re trying to create a chat room, not simply send and receive messages. Luckily there is a broadcast method supported by Socket.iosocket.broadcast.emit("message", 'Hello');This will send the message to all the other connected sockets (chat room clients).// app.jsio.on('connection', function(client) { client.on('messages', function(data) { client.broadcast.emit("messages", data); });});<!-- index.html --><script> var socket = io.connect('http://localhost:8080'); socket.on('messages', function(data) { insertMessage(data) });</script> ## Saving Data On The SocketWe don’t know who is who on this server, so we need to make some possible way of registering usernames.io.on('connection', function(client) { client.on('join', function(name) { client.nickname = name; // set the nickname associated with this client });});We’ll prompt the user for their nickname when they connect, and then we’ll emit that to the server via the ‘join’ event.<script> var server = io.connect('http://localhost:8080'); server.on('connect', function(data) { $('#status').html('Connected to chattr'); nickname = prompt("What is your nickname?"); server.emit('join', nickname); });</script>This ensures that the name is available both to the server, and to the client. 
Now we need to update the messages listener so that before we broadcast the message we get the nickname of the client and use it when emitting the message.// app.jsio.on('connection', function(client) { client.on('join', function(name) { client.nickname = name; }); client.on('messages', function(message) { var nickname = client.nickname; client.broadcast.emit("messages", nickname + ": " + message); // broadcasts the name and message to everyone else client.emit("messages", nickname + ": " + message); // sends the same message back to our own client });}); # Persisting Data - Level 7"
} ,
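The chat-server snippets in the NodeJS notes above arrive one event at a time. As a consolidated sketch (assuming the express and socket.io packages are installed locally, as the notes describe), the whole server fits in a single app.js:

```js
// Consolidated sketch of the chat server described in the notes above.
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

// Serve the page that loads /socket.io/socket.io.js and the chat form.
app.get('/', function(req, res) {
  res.sendFile(__dirname + '/index.html');
});

io.on('connection', function(client) {
  // Remember the nickname the client sends when it joins.
  client.on('join', function(name) {
    client.nickname = name;
  });

  // Prefix incoming messages with the nickname, echo to the sender,
  // and broadcast to every other connected client.
  client.on('messages', function(message) {
    var line = client.nickname + ': ' + message;
    client.emit('messages', line);
    client.broadcast.emit('messages', line);
  });
});

server.listen(8080);
```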
{
"title" : "Simple-Jekyll-Search",
"category" : "",
"tags" : "",
"url" : "/bower_components/simple-jekyll-search/",
"date" : "",
"content" : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.Find it on [npmjs.com](https://www.npmjs.com/package/simple-jekyll-search)---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install```bower install --save simple-jekyll-search# ornpm install --save simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[ {% for post in site.posts %} { "title" : "{{ post.title | escape }}", "category" : "{{ post.category }}", "tags" : "{{ post.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ post.url }}", "date" : "{{ post.date }}" } {% unless forloop.last %},{% endunless %} {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({ searchInput: document.getElementById('search-input'), resultsContainer: document.getElementById('results-container'), json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[ { "title" : "Welcome to Jekyll!", "category" : "", "tags" : "", "url" : "/jekyll/update/2014/11/01/welcome-to-jekyll.html", "date" : "2014-11-01 21:07:22 +0100" }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({ ... 
middleware: function(prop, value, template){ if( prop === 'bar' ){ return value.replace(/^\//, '') } } ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[ {% for post in site.posts %} { "title" : "{{ post.title | escape }}", "category" : "{{ post.category }}", "tags" : "{{ post.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ post.url }}", "date" : "{{ post.date }}", "content" : "{{ post.content | strip_html | strip_newlines }}" } {% unless forloop.last %},{% endunless %} {% endfor %} , {% for page in site.pages %} { {% if page.title != nil %} "title" : "{{ page.title | escape }}", "category" : "{{ page.category }}", "tags" : "{{ page.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ page.url }}", "date" : "{{ page.date }}", "content" : "{{ page.content | strip_html | strip_newlines }}" {% endif %} } {% unless forloop.last %},{% endunless %} {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content" : "{{ page.content | strip_html | strip_newlines }}"```with```"content" : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content" : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"
} ,
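The empty code fences in the Simple-Jekyll-Search entry above (after "For example in _layouts/default.html:") mark where the layout markup was stripped along with the rest of the HTML. A plausible reconstruction is shown below, reusing the element IDs from the configuration example and this site's bower_components install path; the exact script filename and the searchResultTemplate line are assumptions rather than text recovered from this entry:

```html
<!-- Html Elements for Search -->
<div id="search-container">
  <input type="text" id="search-input" placeholder="search...">
  <ul id="results-container"></ul>
</div>

<!-- Script pointing to jekyll-search.js -->
<script src="{{ site.baseurl }}/bower_components/simple-jekyll-search/dest/jekyll-search.js" type="text/javascript"></script>

<script>
  // Wire the plugin to the elements above (see the Configuration section).
  SimpleJekyllSearch({
    searchInput: document.getElementById('search-input'),
    resultsContainer: document.getElementById('results-container'),
    json: '/search.json',
    searchResultTemplate: '<li><a href="{url}">{title}</a></li>'
  })
</script>
```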
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
} ,
{
}
]
If search isn’t working due to invalid JSON
- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use remove_chars as a filter.
For example: in search.json, replace
"content" : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.Find it on [npmjs.com](https://www.npmjs.com/package/simple-jekyll-search)---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install```bower install --save simple-jekyll-search# ornpm install --save simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[ {% for post in site.posts %} { "title" : "{{ post.title | escape }}", "category" : "{{ post.category }}", "tags" : "{{ post.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ post.url }}", "date" : "{{ post.date }}" } {% unless forloop.last %},{% endunless %} {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({ searchInput: document.getElementById('search-input'), resultsContainer: document.getElementById('results-container'), json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[ { "title" : "Welcome to Jekyll!", "category" : "", "tags" : "", "url" : "/jekyll/update/2014/11/01/welcome-to-jekyll.html", "date" : "2014-11-01 21:07:22 +0100" }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({ ... 
middleware: function(prop, value, template){ if( prop === 'bar' ){ return value.replace(/^\//, '') } } ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[ {% for post in site.posts %} { "title" : "{{ post.title | escape }}", "category" : "{{ post.category }}", "tags" : "{{ post.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ post.url }}", "date" : "{{ post.date }}", "content" : "{{ post.content | strip_html | strip_newlines }}" } {% unless forloop.last %},{% endunless %} {% endfor %} , {% for page in site.pages %} { {% if page.title != nil %} "title" : "{{ page.title | escape }}", "category" : "{{ page.category }}", "tags" : "{{ page.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ page.url }}", "date" : "{{ page.date }}", "content" : "{{ page.content | strip_html | strip_newlines }}" {% endif %} } {% unless forloop.last %},{% endunless %} {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content" : "{{ page.content | strip_html | strip_newlines }}"```with```"content" : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content" : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"
with
"content" : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.Find it on [npmjs.com](https://www.npmjs.com/package/simple-jekyll-search)---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install```bower install --save simple-jekyll-search# ornpm install --save simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[ {% for post in site.posts %} { "title" : "{{ post.title | escape }}", "category" : "{{ post.category }}", "tags" : "{{ post.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ post.url }}", "date" : "{{ post.date }}" } {% unless forloop.last %},{% endunless %} {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({ searchInput: document.getElementById('search-input'), resultsContainer: document.getElementById('results-container'), json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[ { "title" : "Welcome to Jekyll!", "category" : "", "tags" : "", "url" : "/jekyll/update/2014/11/01/welcome-to-jekyll.html", "date" : "2014-11-01 21:07:22 +0100" }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({ ... 
middleware: function(prop, value, template){ if( prop === 'bar' ){ return value.replace(/^\//, '') } } ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[ {% for post in site.posts %} { "title" : "{{ post.title | escape }}", "category" : "{{ post.category }}", "tags" : "{{ post.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ post.url }}", "date" : "{{ post.date }}", "content" : "{{ post.content | strip_html | strip_newlines }}" } {% unless forloop.last %},{% endunless %} {% endfor %} , {% for page in site.pages %} { {% if page.title != nil %} "title" : "{{ page.title | escape }}", "category" : "{{ page.category }}", "tags" : "{{ page.tags | join: ', ' }}", "url" : "{{ site.baseurl }}{{ page.url }}", "date" : "{{ page.date }}", "content" : "{{ page.content | strip_html | strip_newlines }}" {% endif %} } {% unless forloop.last %},{% endunless %} {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content" : "{{ page.content | strip_html | strip_newlines }}"```with```"content" : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content" : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"
If this doesn't work when using GitHub Pages, you can try jsonify to make sure the content is JSON compatible:
"content" : "Simple-Jekyll-Search\n====================\n\n[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)\n\nA JavaScript library to add search functionality to any Jekyll blog.\n\nFind it on [npmjs.com](https://www.npmjs.com/package/simple-jekyll-search)\n\n---\n\nidea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)\n\n---\n\n### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)\n\n# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)\n\n# Install\n\n```\nbower install --save simple-jekyll-search\n# or\nnpm install --save simple-jekyll-search\n```\n\n# Getting started\n\nPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.\n\nThis file will be used as a small data source to perform the searches on the client side:\n\n```\n---\n---\n[\n {% for post in site.posts %}\n {\n \"title\" : \"{{ post.title | escape }}\",\n \"category\" : \"{{ post.category }}\",\n \"tags\" : \"{{ post.tags | join: ', ' }}\",\n \"url\" : \"{{ site.baseurl }}{{ post.url }}\",\n \"date\" : \"{{ post.date }}\"\n } {% unless forloop.last %},{% endunless %}\n {% endfor %}\n]\n```\n\nYou need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)\n\nFor example in **_layouts/default.html**:\n\n```\n<!-- Html Elements for Search -->\n<div id=\"search-container\">\n<input type=\"text\" id=\"search-input\" placeholder=\"search...\">\n<ul id=\"results-container\"></ul>\n</div>\n\n<!-- Script pointing to jekyll-search.js -->\n<script src=\"{{ site.baseurl }}/bower_components/simple-jekyll-search/dest/jekyll-search.js\" type=\"text/javascript\"></script>\n```\n\n\n# Configuration\n\nCustomize SimpleJekyllSearch by passing in your configuration options:\n\n```\nSimpleJekyllSearch({\n searchInput: document.getElementById('search-input'),\n resultsContainer: document.getElementById('results-container'),\n json: '/search.json',\n})\n```\n\n#### searchInput (Element) [required]\n\nThe input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.\n\n\n#### resultsContainer (Element) [required]\n\nThe container element in which the search results should be rendered in. 
Typically an `<ul>`.\n\n\n#### json (String|JSON) [required]\n\nYou can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.\n\n\n#### searchResultTemplate (String) [optional]\n\nThe template of a single rendered search result.\n\nThe templating syntax is very simple: You just enclose the properties you want to replace with curly braces.\n\nE.g.\n\nThe template\n\n```\n<li><a href=\"{url}\">{title}</a></li>\n```\n\nwill render to the following\n\n```\n<li><a href=\"/jekyll/update/2014/11/01/welcome-to-jekyll.html\">Welcome to Jekyll!</a></li>\n```\n\nIf the `search.json` contains this data\n\n```\n[\n {\n \"title\" : \"Welcome to Jekyll!\",\n \"category\" : \"\",\n \"tags\" : \"\",\n \"url\" : \"/jekyll/update/2014/11/01/welcome-to-jekyll.html\",\n \"date\" : \"2014-11-01 21:07:22 +0100\"\n }\n]\n```\n\n\n#### templateMiddleware (Function) [optional]\n\nA function that will be called whenever a match in the template is found.\n\nIt gets passed the current property name, property value, and the template.\n\nIf the function returns a non-undefined value, it gets replaced in the template.\n\nThis can be potentially useful for manipulating URLs etc.\n\nExample:\n\n```\nSimpleJekyllSearch({\n ...\n middleware: function(prop, value, template){\n if( prop === 'bar' ){\n return value.replace(/^\\//, '')\n }\n }\n ...\n})\n```\n\nSee the [tests](src/Templater.test.js) for an in-depth code example\n\n\n\n#### noResultsText (String) [optional]\n\nThe HTML that will be shown if the query didn't match anything.\n\n\n#### limit (Number) [optional]\n\nYou can limit the number of posts rendered on the page.\n\n\n#### fuzzy (Boolean) [optional]\n\nEnable fuzzy search to allow less restrictive matching.\n\n#### exclude (Array) [optional]\n\nPass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).\n\n\n\n\n\n\n\n## Enabling full-text search\n\nReplace 'search.json' with the following code:\n\n```\n---\nlayout: null\n---\n[\n {% for post in site.posts %}\n {\n \"title\" : \"{{ post.title | escape }}\",\n \"category\" : \"{{ post.category }}\",\n \"tags\" : \"{{ post.tags | join: ', ' }}\",\n \"url\" : \"{{ site.baseurl }}{{ post.url }}\",\n \"date\" : \"{{ post.date }}\",\n \"content\" : \"{{ post.content | strip_html | strip_newlines }}\"\n } {% unless forloop.last %},{% endunless %}\n {% endfor %}\n ,\n {% for page in site.pages %}\n {\n {% if page.title != nil %}\n \"title\" : \"{{ page.title | escape }}\",\n \"category\" : \"{{ page.category }}\",\n \"tags\" : \"{{ page.tags | join: ', ' }}\",\n \"url\" : \"{{ site.baseurl }}{{ page.url }}\",\n \"date\" : \"{{ page.date }}\",\n \"content\" : \"{{ page.content | strip_html | strip_newlines }}\"\n {% endif %}\n } {% unless forloop.last %},{% endunless %}\n {% endfor %}\n]\n```\n\n\n\n## If search isn't working due to invalid JSON\n\n- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. 
To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.\n\nFor example: in search.json, replace\n```\n\"content\" : \"{{ page.content | strip_html | strip_newlines }}\"\n```\nwith\n```\n\"content\" : \"{{ page.content | strip_html | strip_newlines | remove_chars | escape }}\"\n```\n\nIf this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:\n```\n\"content\" : {{ page.content | jsonify }}\n```\n**Note: you don't need to use quotes ' \" ' in this since ```jsonify``` automatically inserts them.**\n\n\n\n\n\n##Browser support\n\nBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)\n\n\n\n\n\n\n\n# Dev setup\n\n- `npm install` the dependencies.\n\n- `gulp watch` during development\n\n- `npm test` or `npm run test-watch` to run the unit tests\n"
Note: you don't need to put quotes around this, since jsonify automatically inserts them.
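Putting the full-text loop and the jsonify tip together, the site.pages half of search.json would look roughly like the sketch below (assembled from the snippets in this README; note that pages without a title render as empty { } objects, which is where the empty entries earlier in this file come from):

```
---
layout: null
---
[
  {% for page in site.pages %}
  {
    {% if page.title != nil %}
    "title"    : "{{ page.title | escape }}",
    "category" : "{{ page.category }}",
    "tags"     : "{{ page.tags | join: ', ' }}",
    "url"      : "{{ site.baseurl }}{{ page.url }}",
    "date"     : "{{ page.date }}",
    "content"  : {{ page.content | jsonify }}
    {% endif %}
  } {% unless forloop.last %},{% endunless %}
  {% endfor %}
]
```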
Browser support
Browser support should be about IE6+ with this addEventListener shim.
Dev setup
- npm install the dependencies.
- gulp watch during development
- npm test or npm run test-watch to run the unit tests