Refactor plugin to be compatible with Drone 0.5 (#13)

* Refactor plugin to be compatible with Drone 0.5

* Add vendor files

* Re-add logo.svg, make loading environment from .env file optional, and use drone-go/template

* Fix README

* Fix issue with date formatting, update the DOCS, and improve types

* Add working directory and volume mount to README example
Michael de Wit 2017-01-16 13:20:59 +01:00 committed by GitHub
parent 73b37a0fd6
commit f128c947ab
370 changed files with 166756 additions and 422 deletions


@@ -1 +0,0 @@
eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkExMjhHQ00ifQ.xT1t3tFZvZOhWHLDkvoABEebjQ_oyQ79Iko6UCYNIjzj8_-LOQdCJz93SyHriWHg7K3J5Q_PkPeZY5XtiJRBg62kAhpT4NxnWMQeXC3u4qfGqUJNFNRYwkz-yCHxAKLlKIPzqonPPo2acs1TWs5hO5yV14hVhNA7ahjhDPl59ciYvlUE6b1rt1Wjua52sla9mPkH4Tp73OWZRBEMr4KEi8TD4O2uWgZWzHYaYLvAlHwz-TPFYM1ZEvbukBSOjqSYIXRW4NWGbxdh9cmHcRFhmT53_rlgfuwKECdARE72SaI49HXSSp-l2rP7TK5U9bG8lVwo_9U3VDCZg90Br25K-w.YTWp3MKQoq40d1p4.YSjdjah29g9lAV42F24axBtHZZTU8Cdufn2ROCsgN0Ql-xL_Cx3YFqDHrw-s1l2YCvj5Mzhi7w49LKnKf_icMOC28cM8UcrL0_VTucqOssvNxSyRSiSNlgUkeTVpuMMpXfofgW4nHiCjVj-5c4nIELeMWoFE0LcRmVRJzDpE6MExIaybev1X-lwC9boBTDGleKJPteZYotkNKgA5CBzWucoD0wcm3igH_6-ZOLvltAnjWyk_KI4akseGLh9T112XcRhvmUtJRxvD3adU5zk2A5oYHYxFaRxy5Nr4xTIpMdMk_S9W6_4JgztkAtsVIjWRAB-_CqsLS7tHNBmrT7ciH6DQlpo6ZKJ42T4AWIWfO0LFL_dI39rn2YZSfz-dEWl9ZsNpc-42LB9CRKjxdWo8W48qnMwrdUL0ZgO6H0kyVTw.IOn4GNuz0vEgnHB8_WecDQ


@@ -1,33 +1,25 @@
build:
image: golang:1.5
environment:
- CGO_ENABLED=0
commands:
- make deps
- make vet
- make build
- make test
workspace:
base: /go
path: src/github.com/drone-plugins/drone-email
publish:
coverage:
pipeline:
test:
image: golang:1.6
environment:
- CGO_ENABLED=0
- GOPATH=/go
commands:
- go vet
- go test -cover -coverprofile=coverage.out
- go build -ldflags "-s -w -X main.build=$DRONE_BUILD_NUMBER" -a -tags netgo
latest:
image: plugins/docker
repo: plugins/email
tags: [ "latest", "1.0", "1" ]
when:
branch: master
docker:
username: $$DOCKER_USER
password: $$DOCKER_PASS
email: $$DOCKER_EMAIL
repo: plugins/drone-email
tag: latest
when:
branch: master
docker:
username: $$DOCKER_USER
password: $$DOCKER_PASS
email: $$DOCKER_EMAIL
repo: plugins/drone-email
tag: develop
when:
branch: develop
event: push
plugin:
name: Email

1
.gitignore vendored

@@ -22,6 +22,7 @@ _testmain.go
*.exe
*.test
*.prof
.env
coverage.out
drone-email

90
DOCS.md

@@ -1,20 +1,27 @@
Use this plugin for sending build status notifications via Email. You can
override the default configuration with the following parameters:
Use the Email plugin for sending build status notifications via email.
* `from` - Send notifications from this address
* `host` - SMTP server host
* `port` - SMTP server port, defaults to `587`
* `username` - SMTP username
* `password` - SMTP password
* `recipients` - List of recipients, defaults to commit email
## Config
You can configure the plugin using the following parameters:
* **from** - Send notifications from this address
* **host** - SMTP server host
* **port** - SMTP server port, defaults to `587`
* **username** - SMTP username
* **password** - SMTP password
* **skip_verify** - Skip verification of SSL certificates, defaults to `false`
* **recipients** - List of recipients to send this mail to (besides the commit author)
* **recipients_only** - Do not send mails to the commit author, but only to **recipients**, defaults to `false`
* **subject** - The subject line template
* **body** - The email body template
## Example
The following is a sample configuration in your .drone.yml file:
```yaml
notify:
email:
pipeline:
notify:
image: plugins/email
from: noreply@github.com
host: smtp.mailgun.org
username: octocat
@@ -23,6 +30,45 @@ notify:
- octocat@github.com
```
### Secrets
The Email plugin supports reading credentials and other parameters from the Drone secret store. This is strongly recommended instead of storing credentials in the pipeline configuration in plain text.
```diff
pipeline:
notify:
image: plugins/email
from: noreply@github.com
host: smtp.mailgun.org
- username: octocat
- password: 12345
recipients:
- octocat@github.com
```
Use the command line utility to add the secrets to the store:
```sh
drone secret add --image=plugins/email \
octocat/hello-world EMAIL_USERNAME octocat
drone secret add --image=plugins/email \
octocat/hello-world EMAIL_PASSWORD 12345
```
Then sign the YAML file after all secrets are added:
```sh
drone sign octocat/hello-world
```
The following secret values can be set to configure the plugin:
* **EMAIL_HOST** - corresponds to **host**
* **EMAIL_PORT** - corresponds to **port**
* **EMAIL_USERNAME** - corresponds to **username**
* **EMAIL_PASSWORD** - corresponds to **password**
* **EMAIL_RECIPIENTS** - corresponds to **recipients**
See [Secret Guide](http://readme.drone.io/usage/secret-guide/) for additional information on secrets.
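These names work because each corresponding flag in `main.go` (later in this diff) lists the secret variable ahead of its `PLUGIN_` counterpart, so a value injected from the secret store takes precedence over a plain parameter. A minimal, self-contained sketch of that fallback, using the same urfave/cli pattern as the plugin:
```go
package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli"
)

func main() {
	app := cli.NewApp()
	app.Flags = []cli.Flag{
		// Same pattern as the plugin's username flag: the secret variable
		// (EMAIL_USERNAME) is consulted before the parameter form
		// (PLUGIN_USERNAME); the first one that is set wins.
		cli.StringFlag{
			Name:   "username",
			Usage:  "smtp server username",
			EnvVar: "EMAIL_USERNAME,PLUGIN_USERNAME",
		},
	}
	app.Action = func(c *cli.Context) error {
		fmt.Println("username:", c.String("username"))
		return nil
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```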
### Custom Templates
In some cases you may want to customize the look and feel of the email message
@@ -30,24 +76,23 @@ so you can use custom templates. For the use case we expose the following
additional parameters, all of which accept a custom handlebars template, directly
provided as a string or as a remote URL which gets fetched and parsed:
* `subject` - A handlebars template to create a custom subject. For more
* **subject** - A handlebars template to create a custom subject. For more
details take a look at the [docs](http://handlebarsjs.com/). You can see the
default template [here](https://github.com/drone-plugins/drone-email/blob/master/template.go#L4)
* `template` - A handlebars template to create a custom template. For more
default template [here](https://github.com/drone-plugins/drone-email/blob/master/defaults.go#L14)
* **body** - A handlebars template to create a custom template. For more
details take a look at the [docs](http://handlebarsjs.com/). You can see the
default template [here](https://github.com/drone-plugins/drone-email/blob/master/template.go#L8-L292)
default template [here](https://github.com/drone-plugins/drone-email/blob/master/defaults.go#L19-L267)
Example configuration that generates a custom email:
```yaml
notify:
email:
pipeline:
notify:
image: plugins/email
from: noreply@github.com
host: smtp.mailgun.org
username: octocat
password: 12345
recipients:
- octocat@github.com
subject: >
[{{ build.status }}]
{{ repo.owner }}/{{ repo.name }}
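Both strings are rendered with the `drone-go/template` package against the build context, the same call `plugin.go` makes later in this diff, so a custom subject can be tried out locally before it goes into the pipeline. A minimal sketch with placeholder repository and build values (only the fields the subject references):
```go
package main

import (
	"fmt"
	"log"

	"github.com/drone/drone-go/template"
)

func main() {
	// Placeholder context mirroring the plugin's Repo/Build fields.
	ctx := struct {
		Repo  struct{ Owner, Name string }
		Build struct{ Status, Branch, Commit string }
	}{}
	ctx.Repo.Owner, ctx.Repo.Name = "octocat", "hello-world"
	ctx.Build.Status, ctx.Build.Branch = "success", "master"
	ctx.Build.Commit = "7fd1a60b01f91b314f59955a4e4d4e80d8edf11d"

	subject := "[{{ build.status }}] {{ repo.owner }}/{{ repo.name }} " +
		"({{ build.branch }} - {{ truncate build.commit 8 }})"

	out, err := template.RenderTrim(subject, ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```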
@@ -63,18 +108,17 @@ as it leads to an insecure environment. Please use this option only within your
intranet and/or with trusted resources. For this use case we expose the
following additional parameter:
* `skip_verify` - Skip verification of SSL certificates
* **skip_verify** - Skip verification of SSL certificates
Example configuration that skips SSL verification:
```yaml
notify:
email:
pipeline:
notify:
image: plugins/email
from: noreply@github.com
host: smtp.mailgun.org
username: octocat
password: 12345
skip_verify: true
recipients:
- octocat@github.com
```


@@ -1,14 +1,7 @@
# Docker image for the Drone Email plugin
#
# cd $GOPATH/src/github.com/drone-plugins/drone-email
# make deps build docker
FROM alpine:3.2
FROM alpine:3.4
RUN apk update && \
apk add \
ca-certificates && \
rm -rf /var/cache/apk/*
apk add --no-cache ca-certificates
ADD drone-email /bin/
ENTRYPOINT ["/bin/drone-email"]

7
Dockerfile.armhf Normal file

@@ -0,0 +1,7 @@
FROM armhfbuild/alpine:3.4
RUN apk update && \
apk add --no-cache ca-certificates
ADD drone-email /bin/
ENTRYPOINT ["/bin/drone-email"]


@@ -1,34 +0,0 @@
.PHONY: all clean deps fmt vet test docker
EXECUTABLE ?= drone-email
IMAGE ?= plugins/$(EXECUTABLE)
COMMIT ?= $(shell git rev-parse --short HEAD)
LDFLAGS = -X "main.buildCommit=$(COMMIT)"
PACKAGES = $(shell go list ./... | grep -v /vendor/)
all: deps build test
clean:
go clean -i ./...
deps:
go get -t ./...
fmt:
go fmt $(PACKAGES)
vet:
go vet $(PACKAGES)
test:
@for PKG in $(PACKAGES); do go test -cover -coverprofile $$GOPATH/src/$$PKG/coverage.out $$PKG || exit 1; done;
docker:
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags '-s -w $(LDFLAGS)'
docker build --rm -t $(IMAGE) .
$(EXECUTABLE): $(wildcard *.go)
go build -ldflags '-s -w $(LDFLAGS)'
build: $(EXECUTABLE)

126
README.md

@@ -1,113 +1,57 @@
# drone-email
[![Build Status](http://beta.drone.io/api/badges/drone-plugins/drone-email/status.svg)](http://beta.drone.io/drone-plugins/drone-email)
[![Coverage Status](https://aircover.co/badges/drone-plugins/drone-email/coverage.svg)](https://aircover.co/drone-plugins/drone-email)
[![](https://badge.imagelayers.io/plugins/drone-email:latest.svg)](https://imagelayers.io/?images=plugins/drone-email:latest 'Get your own badge on imagelayers.io')
[![Go Doc](https://godoc.org/github.com/drone-plugins/drone-email?status.svg)](http://godoc.org/github.com/drone-plugins/drone-email)
[![Go Report](https://goreportcard.com/badge/github.com/drone-plugins/drone-email)](https://goreportcard.com/report/github.com/drone-plugins/drone-email)
[![Join the chat at https://gitter.im/drone/drone](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/drone/drone)
Drone plugin to send build status notifications via Email. For the usage information and a listing of the available options please take a look at [the docs](DOCS.md).
## Binary
Build the binary using `make`:
Build the binary with the following command:
```
make deps build
```
### Example
```sh
./drone-email <<EOF
{
"repo": {
"clone_url": "git://github.com/drone/drone",
"owner": "drone",
"name": "drone",
"full_name": "drone/drone"
},
"system": {
"link_url": "https://beta.drone.io"
},
"build": {
"number": 22,
"status": "success",
"started_at": 1421029603,
"finished_at": 1421029813,
"message": "Update the Readme",
"author": "johnsmith",
"author_email": "john.smith@gmail.com",
"event": "push",
"branch": "master",
"commit": "436b7a6e2abaddfd35740527353e78a227ddcb2c",
"ref": "refs/heads/master"
},
"workspace": {
"root": "/drone/src",
"path": "/drone/src/github.com/drone/drone"
},
"vargs": {
"from": "noreply@foo.com",
"host": "smtp.mailgun.org",
"port": 587,
"username": "",
"password": "",
"recipients": [
"octocat@github.com"
]
}
}
EOF
go build
```
## Docker
Build the container using `make`:
Build the docker image with the following commands:
```
make deps docker
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -tags netgo
docker build -t plugins/email .
```
Please note that if the binary is not built for linux/amd64 with CGO disabled, running the Docker image will result in an error:
```
docker: Error response from daemon: Container command
'/bin/drone-email' not found or does not exist..
```
### Example
Execute from the working directory:
```sh
docker run -i plugins/drone-email <<EOF
{
"repo": {
"clone_url": "git://github.com/drone/drone",
"owner": "drone",
"name": "drone",
"full_name": "drone/drone"
},
"system": {
"link_url": "https://beta.drone.io"
},
"build": {
"number": 22,
"status": "success",
"started_at": 1421029603,
"finished_at": 1421029813,
"message": "Update the Readme",
"author": "johnsmith",
"author_email": "john.smith@gmail.com"
"event": "push",
"branch": "master",
"commit": "436b7a6e2abaddfd35740527353e78a227ddcb2c",
"ref": "refs/heads/master"
},
"workspace": {
"root": "/drone/src",
"path": "/drone/src/github.com/drone/drone"
},
"vargs": {
"from": "noreply@foo.com",
"host": "smtp.mailgun.org",
"port": 587,
"username": "",
"password": "",
"recipients": [
"octocat@github.com"
]
}
}
EOF
docker run --rm \
-e PLUGIN_FROM=drone@test.test \
-e PLUGIN_HOST=smtp.test.test \
-e PLUGIN_USERNAME=drone \
-e PLUGIN_PASSWORD=test \
-e DRONE_REPO_OWNER=octocat \
-e DRONE_REPO_NAME=hello-world \
-e DRONE_COMMIT_SHA=7fd1a60b01f91b314f59955a4e4d4e80d8edf11d \
-e DRONE_COMMIT_BRANCH=master \
-e DRONE_COMMIT_AUTHOR=octocat \
-e DRONE_COMMIT_AUTHOR_EMAIL=octocat@test.test \
-e DRONE_BUILD_NUMBER=1 \
-e DRONE_BUILD_STATUS=success \
-e DRONE_BUILD_LINK=http://github.com/octocat/hello-world \
-e DRONE_COMMIT_MESSAGE="Hello world!" \
-v $(pwd):$(pwd) \
-w $(pwd) \
plugins/email
```


@@ -1,16 +1,26 @@
package main
var defaultSubject = `
const (
// DefaultPort is the default SMTP port to use
DefaultPort = 587
// DefaultOnlyRecipients controls whether to exclude the commit author by default
DefaultOnlyRecipients = false
// DefaultSkipVerify controls whether to skip SSL verification for the SMTP server
DefaultSkipVerify = false
)
// DefaultSubject is the default subject template to use for the email
const DefaultSubject = `
[{{ build.status }}] {{ repo.owner }}/{{ repo.name }} ({{ build.branch }} - {{ truncate build.commit 8 }})
`
var defaultTemplate = `
// DefaultTemplate is the default body template to use for the email
const DefaultTemplate = `
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta name="viewport" content="width=device-width" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<style>
* {
margin: 0;
@@ -19,7 +29,6 @@ var defaultTemplate = `
box-sizing: border-box;
font-size: 14px;
}
body {
-webkit-font-smoothing: antialiased;
-webkit-text-size-adjust: none;
@@ -28,16 +37,13 @@ var defaultTemplate = `
line-height: 1.6;
background-color: #f6f6f6;
}
table td {
vertical-align: top;
}
.body-wrap {
background-color: #f6f6f6;
width: 100%;
}
.container {
display: block !important;
max-width: 600px !important;
@@ -45,33 +51,27 @@ var defaultTemplate = `
/* makes it centered */
clear: both !important;
}
.content {
max-width: 600px;
margin: 0 auto;
display: block;
padding: 20px;
}
.main {
background: #fff;
border: 1px solid #e9e9e9;
border-radius: 3px;
}
.content-wrap {
padding: 20px;
}
.content-block {
padding: 0 0 20px;
}
.header {
width: 100%;
margin-bottom: 20px;
}
h1, h2, h3 {
font-family: "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;
color: #000;
@@ -79,74 +79,59 @@ var defaultTemplate = `
line-height: 1.2;
font-weight: 400;
}
h1 {
font-size: 32px;
font-weight: 500;
}
h2 {
font-size: 24px;
}
h3 {
font-size: 18px;
}
hr {
border: 1px solid #e9e9e9;
margin: 20px 0;
height: 1px;
padding: 0;
}
p,
ul,
ol {
margin-bottom: 10px;
font-weight: normal;
}
p li,
ul li,
ol li {
margin-left: 5px;
list-style-position: inside;
}
a {
color: #348eda;
text-decoration: underline;
}
.last {
margin-bottom: 0;
}
.first {
margin-top: 0;
}
.padding {
padding: 10px 0;
}
.aligncenter {
text-align: center;
}
.alignright {
text-align: right;
}
.alignleft {
text-align: left;
}
.clear {
clear: both;
}
.alert {
font-size: 16px;
color: #fff;
@@ -155,26 +140,21 @@ var defaultTemplate = `
text-align: center;
border-radius: 3px 3px 0 0;
}
.alert a {
color: #fff;
text-decoration: none;
font-weight: 500;
font-size: 16px;
}
.alert.alert-warning {
background: #ff9f00;
}
.alert.alert-bad {
background: #d0021b;
}
.alert.alert-good {
background: #68b90f;
}
@media only screen and (max-width: 640px) {
h1,
h2,
@@ -182,23 +162,18 @@ var defaultTemplate = `
font-weight: 600 !important;
margin: 20px 0 5px !important;
}
h1 {
font-size: 22px !important;
}
h2 {
font-size: 18px !important;
}
h3 {
font-size: 16px !important;
}
.container {
width: 100% !important;
}
.content,
.content-wrapper {
padding: 10px !important;
@@ -216,13 +191,13 @@ var defaultTemplate = `
<tr>
{{#success build.status}}
<td class="alert alert-good">
<a href="{{ system.link_url }}/{{ repo.owner }}/{{ repo.name }}/{{ build.number }}">
<a href="{{ build.link }}">
Successful build #{{ build.number }}
</a>
</td>
{{else}}
<td class="alert alert-bad">
<a href="{{ system.link_url }}/{{ repo.owner }}/{{ repo.name }}/{{ build.number }}">
<a href="{{ build.link }}">
Failed build #{{ build.number }}
</a>
</td>
@@ -244,7 +219,7 @@ var defaultTemplate = `
Author:
</td>
<td>
{{ build.author }}
{{ build.author.name }} ({{ build.author.email }})
</td>
</tr>
<tr>
@@ -265,10 +240,10 @@ var defaultTemplate = `
</tr>
<tr>
<td>
Time:
Started at:
</td>
<td>
{{ duration build.started_at build.finished_at }}
{{ datetime build.started "Mon Jan 2 15:04:05 MST 2006" "Local" }}
</td>
</tr>
</table>
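The `datetime` helper used above takes a Unix timestamp, a Go reference-time layout string, and a time-zone name. Purely to illustrate what those three arguments mean (this is a sketch, not the drone-go/template implementation):
```go
package main

import (
	"fmt"
	"time"
)

// datetime sketches the shape of the template helper: format a Unix
// timestamp with a Go reference-time layout in the named location.
func datetime(timestamp int64, layout, zone string) string {
	loc, err := time.LoadLocation(zone) // "Local", "UTC", "Europe/Amsterdam", ...
	if err != nil {
		loc = time.UTC
	}
	return time.Unix(timestamp, 0).In(loc).Format(layout)
}

func main() {
	// Timestamp taken from the README example payload.
	fmt.Println(datetime(1421029603, "Mon Jan 2 15:04:05 MST 2006", "Local"))
}
```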

245
main.go

@@ -4,55 +4,218 @@ import (
"fmt"
"os"
"github.com/drone/drone-go/drone"
"github.com/drone/drone-go/plugin"
log "github.com/Sirupsen/logrus"
"github.com/joho/godotenv"
"github.com/urfave/cli"
)
var (
buildCommit string
)
var build = "0" // build number set at compile-time
func main() {
fmt.Printf("Drone Email Plugin built from %s\n", buildCommit)
system := drone.System{}
repo := drone.Repo{}
build := drone.Build{}
vargs := Params{}
plugin.Param("system", &system)
plugin.Param("repo", &repo)
plugin.Param("build", &build)
plugin.Param("vargs", &vargs)
plugin.MustParse()
if len(vargs.Recipients) == 0 {
vargs.Recipients = []string{
build.Email,
}
// Load env-file if it exists first
if env := os.Getenv("PLUGIN_ENV_FILE"); env != "" {
godotenv.Load(env)
}
if vargs.Subject == "" {
vargs.Subject = defaultSubject
app := cli.NewApp()
app.Name = "email plugin"
app.Usage = "email plugin"
app.Action = run
app.Version = fmt.Sprintf("1.0.%s", build)
app.Flags = []cli.Flag{
// Plugin environment
cli.StringFlag{
Name: "from",
Usage: "from address",
EnvVar: "PLUGIN_FROM",
},
cli.StringFlag{
Name: "host",
Usage: "smtp host",
EnvVar: "EMAIL_HOST,PLUGIN_HOST",
},
cli.IntFlag{
Name: "port",
Value: DefaultPort,
Usage: "smtp port",
EnvVar: "EMAIL_PORT,PLUGIN_PORT",
},
cli.StringFlag{
Name: "username",
Usage: "smtp server username",
EnvVar: "EMAIL_USERNAME,PLUGIN_USERNAME",
},
cli.StringFlag{
Name: "password",
Usage: "smtp server password",
EnvVar: "EMAIL_PASSWORD,PLUGIN_PASSWORD",
},
cli.BoolFlag{
Name: "skip.verify",
Usage: "skip tls verify",
EnvVar: "PLUGIN_SKIP_VERIFY",
},
cli.StringSliceFlag{
Name: "recipients",
Usage: "recipient addresses",
EnvVar: "EMAIL_RECIPIENTS,PLUGIN_RECIPIENTS",
},
cli.BoolFlag{
Name: "recipients.only",
Usage: "send to recipients only",
EnvVar: "PLUGIN_RECIPIENTS_ONLY",
},
cli.StringFlag{
Name: "template.subject",
Value: DefaultSubject,
Usage: "subject template",
EnvVar: "PLUGIN_SUBJECT",
},
cli.StringFlag{
Name: "template.body",
Value: DefaultTemplate,
Usage: "body template",
EnvVar: "PLUGIN_BODY",
},
// Drone environment
cli.StringFlag{
Name: "repo.owner",
Usage: "repository owner",
EnvVar: "DRONE_REPO_OWNER",
},
cli.StringFlag{
Name: "repo.name",
Usage: "repository name",
EnvVar: "DRONE_REPO_NAME",
},
cli.StringFlag{
Name: "commit.sha",
Usage: "git commit sha",
EnvVar: "DRONE_COMMIT_SHA",
},
cli.StringFlag{
Name: "commit.ref",
Value: "refs/heads/master",
Usage: "git commit ref",
EnvVar: "DRONE_COMMIT_REF",
},
cli.StringFlag{
Name: "commit.branch",
Value: "master",
Usage: "git commit branch",
EnvVar: "DRONE_COMMIT_BRANCH",
},
cli.StringFlag{
Name: "commit.author.name",
Usage: "git author name",
EnvVar: "DRONE_COMMIT_AUTHOR",
},
cli.StringFlag{
Name: "commit.author.email",
Usage: "git author email",
EnvVar: "DRONE_COMMIT_AUTHOR_EMAIL",
},
cli.StringFlag{
Name: "commit.author.avatar",
Usage: "git author avatar",
EnvVar: "DRONE_COMMIT_AUTHOR_AVATAR",
},
cli.StringFlag{
Name: "commit.message",
Usage: "git commit message",
EnvVar: "DRONE_COMMIT_MESSAGE",
},
cli.StringFlag{
Name: "build.event",
Value: "push",
Usage: "build event",
EnvVar: "DRONE_BUILD_EVENT",
},
cli.IntFlag{
Name: "build.number",
Usage: "build number",
EnvVar: "DRONE_BUILD_NUMBER",
},
cli.StringFlag{
Name: "build.status",
Usage: "build status",
Value: "success",
EnvVar: "DRONE_BUILD_STATUS",
},
cli.StringFlag{
Name: "build.link",
Usage: "build link",
EnvVar: "DRONE_BUILD_LINK",
},
cli.Int64Flag{
Name: "build.started",
Usage: "build started",
EnvVar: "DRONE_BUILD_STARTED",
},
cli.Int64Flag{
Name: "build.created",
Usage: "build created",
EnvVar: "DRONE_BUILD_CREATED",
},
cli.StringFlag{
Name: "build.tag",
Usage: "build tag",
EnvVar: "DRONE_TAG",
},
cli.Int64Flag{
Name: "job.started",
Usage: "job started",
EnvVar: "DRONE_JOB_STARTED",
},
}
if vargs.Template == "" {
vargs.Template = defaultTemplate
}
if vargs.Port == 0 {
vargs.Port = 587
}
err := Send(&Context{
System: system,
Repo: repo,
Build: build,
Vargs: vargs,
})
if err != nil {
fmt.Println(err)
if err := app.Run(os.Args); err != nil {
log.Fatal(err)
os.Exit(1)
}
}
func run(c *cli.Context) error {
plugin := Plugin{
Repo: Repo{
Owner: c.String("repo.owner"),
Name: c.String("repo.name"),
},
Build: Build{
Tag: c.String("build.tag"),
Number: c.Int("build.number"),
Event: c.String("build.event"),
Status: c.String("build.status"),
Commit: c.String("commit.sha"),
Ref: c.String("commit.ref"),
Branch: c.String("commit.branch"),
Author: Author{
Name: c.String("commit.author.name"),
Email: c.String("commit.author.email"),
Avatar: c.String("commit.author.avatar"),
},
Message: c.String("commit.message"),
Link: c.String("build.link"),
Started: c.Int64("build.started"),
Created: c.Int64("build.created"),
},
Job: Job{
Started: c.Int64("job.started"),
},
Config: Config{
From: c.String("from"),
Host: c.String("host"),
Port: c.Int("port"),
Username: c.String("username"),
Password: c.String("password"),
SkipVerify: c.Bool("skip.verify"),
Recipients: c.StringSlice("recipients"),
RecipientsOnly: c.Bool("recipients.only"),
Subject: c.String("template.subject"),
Body: c.String("template.body"),
},
}
return plugin.Exec()
}

140
plugin.go Normal file

@@ -0,0 +1,140 @@
package main
import (
"crypto/tls"
log "github.com/Sirupsen/logrus"
"github.com/aymerick/douceur/inliner"
"github.com/drone/drone-go/template"
"github.com/jaytaylor/html2text"
"gopkg.in/gomail.v2"
)
type (
Repo struct {
Owner string
Name string
}
Author struct {
Name string
Email string
Avatar string
}
Build struct {
Tag string
Event string
Number int
Commit string
Ref string
Branch string
Author Author
Message string
Status string
Link string
Started int64
Created int64
}
Config struct {
From string
Host string
Port int
Username string
Password string
SkipVerify bool
Recipients []string
RecipientsOnly bool
Subject string
Body string
}
Job struct {
Started int64
}
Plugin struct {
Repo Repo
Build Build
Config Config
Job Job
}
)
// Exec will send emails over SMTP
func (p Plugin) Exec() error {
var dialer *gomail.Dialer
if !p.Config.RecipientsOnly {
p.Config.Recipients = append(p.Config.Recipients, p.Build.Author.Email)
}
if p.Config.Username == "" && p.Config.Password == "" {
dialer = &gomail.Dialer{Host: p.Config.Host, Port: p.Config.Port}
} else {
dialer = gomail.NewDialer(p.Config.Host, p.Config.Port, p.Config.Username, p.Config.Password)
}
if p.Config.SkipVerify {
dialer.TLSConfig = &tls.Config{InsecureSkipVerify: true}
}
closer, err := dialer.Dial()
if err != nil {
log.Errorf("Error while dialing SMTP server: %v", err)
return err
}
type Context struct {
Job Job
Repo Repo
Build Build
Config Config
}
ctx := Context{
Job: p.Job,
Repo: p.Repo,
Build: p.Build,
Config: p.Config,
}
// Render body in HTML and plain text
renderedBody, err := template.RenderTrim(p.Config.Body, ctx)
if err != nil {
log.Errorf("Could not render body template: %v", err)
return err
}
html, err := inliner.Inline(renderedBody)
if err != nil {
log.Errorf("Could not inline rendered body: %v", err)
return err
}
plainBody, err := html2text.FromString(html)
if err != nil {
log.Errorf("Could not convert html to text: %v", err)
return err
}
// Render subject
subject, err := template.RenderTrim(p.Config.Subject, ctx)
if err != nil {
log.Errorf("Could not render subject template: %v", err)
return err
}
// Send emails
message := gomail.NewMessage()
for _, recipient := range p.Config.Recipients {
message.SetHeader("From", p.Config.From)
message.SetAddressHeader("To", recipient, "")
message.SetHeader("Subject", subject)
message.AddAlternative("text/plain", plainBody)
message.AddAlternative("text/html", html)
if err := gomail.Send(closer, message); err != nil {
log.Errorf("Could not send email to %q: %v", recipient, err)
}
message.Reset()
}
return nil
}
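A quick way to exercise the new `Plugin` type end to end is a throwaway test in the same package. The SMTP endpoint below is an assumption (for example a local MailHog container listening on port 1025); the remaining values mirror the README example:
```go
package main

import "testing"

// Illustrative smoke test, intended to be run manually against a local
// SMTP sink rather than in CI.
func TestExecSendsMail(t *testing.T) {
	p := Plugin{
		Repo: Repo{Owner: "octocat", Name: "hello-world"},
		Build: Build{
			Number:  1,
			Event:   "push",
			Status:  "success",
			Commit:  "7fd1a60b01f91b314f59955a4e4d4e80d8edf11d",
			Branch:  "master",
			Author:  Author{Name: "octocat", Email: "octocat@test.test"},
			Message: "Hello world!",
			Link:    "http://github.com/octocat/hello-world",
		},
		Config: Config{
			From:       "drone@test.test",
			Host:       "localhost", // assumption: local MailHog or similar
			Port:       1025,        // assumption: MailHog's SMTP port
			Recipients: []string{"octocat@test.test"},
			Subject:    DefaultSubject,
			Body:       DefaultTemplate,
		},
	}
	if err := p.Exec(); err != nil {
		t.Fatal(err)
	}
}
```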

129
sender.go

@@ -1,129 +0,0 @@
package main
import (
"crypto/tls"
"github.com/aymerick/douceur/inliner"
"github.com/drone/drone-go/drone"
"github.com/drone/drone-go/template"
"github.com/go-gomail/gomail"
"github.com/jaytaylor/html2text"
)
func Send(context *Context) error {
payload := &drone.Payload{
System: &context.System,
Repo: &context.Repo,
Build: &context.Build,
}
subject, plain, html, err := build(
payload,
context,
)
if err != nil {
return err
}
return send(
subject,
plain,
html,
context,
)
}
func build(payload *drone.Payload, context *Context) (string, string, string, error) {
subject, err := template.RenderTrim(
context.Vargs.Subject,
payload)
if err != nil {
return "", "", "", err
}
body, err := template.RenderTrim(
context.Vargs.Template,
payload,
)
if err != nil {
return "", "", "", err
}
html, err := inliner.Inline(body)
if err != nil {
return "", "", "", err
}
plain, err := html2text.FromString(
html,
)
if err != nil {
return "", "", "", err
}
return subject, plain, html, nil
}
func send(subject, plainBody, htmlBody string, c *Context) error {
if len(c.Vargs.Recipients) == 0 {
c.Vargs.Recipients = []string{
c.Build.Email,
}
}
m := gomail.NewMessage()
m.SetHeader(
"To",
c.Vargs.Recipients...,
)
m.SetHeader(
"From",
c.Vargs.From,
)
m.SetHeader(
"Subject",
subject,
)
m.AddAlternative(
"text/plain",
plainBody,
)
m.AddAlternative(
"text/html",
htmlBody,
)
var d *gomail.Dialer
if c.Vargs.Username == "" {
d = &gomail.Dialer{
Host: c.Vargs.Host,
Port: c.Vargs.Port,
SSL: c.Vargs.Port == 465,
}
} else {
d = gomail.NewDialer(
c.Vargs.Host,
c.Vargs.Port,
c.Vargs.Username,
c.Vargs.Password,
)
}
if c.Vargs.SkipVerify {
d.TLSConfig = &tls.Config{
InsecureSkipVerify: true,
}
}
return d.DialAndSend(m)
}


@@ -1,24 +0,0 @@
package main
import (
"github.com/drone/drone-go/drone"
)
type Params struct {
Recipients []string `json:"recipients"`
Host string `json:"host"`
Port int `json:"port"`
From string `json:"from"`
Username string `json:"username"`
Password string `json:"password"`
Subject string `json:"subject"`
Template string `json:"template"`
SkipVerify bool `json:"skip_verify"`
}
type Context struct {
System drone.System
Repo drone.Repo
Build drone.Build
Vargs Params
}

12
vendor/github.com/PuerkitoBio/goquery/LICENSE generated vendored Normal file

@@ -0,0 +1,12 @@
Copyright (c) 2012-2016, Martin Angers & Contributors
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

123
vendor/github.com/PuerkitoBio/goquery/README.md generated vendored Normal file

@@ -0,0 +1,123 @@
# goquery - a little like that j-thing, only in Go [![build status](https://secure.travis-ci.org/PuerkitoBio/goquery.png)](http://travis-ci.org/PuerkitoBio/goquery) [![GoDoc](https://godoc.org/github.com/PuerkitoBio/goquery?status.png)](http://godoc.org/github.com/PuerkitoBio/goquery)
goquery brings a syntax and a set of features similar to [jQuery][] to the [Go language][go]. It is based on Go's [net/html package][html] and the CSS Selector library [cascadia][]. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off.
Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML. See the [wiki][] for various options to do this.
Syntax-wise, it is as close as possible to jQuery, with the same function names when possible, and that warm and fuzzy chainable interface. jQuery being the ultra-popular library that it is, I felt that writing a similar HTML-manipulating library was better to follow its API than to start anew (in the same spirit as Go's `fmt` package), even though some of its methods are less than intuitive (looking at you, [index()][index]...).
## Installation
Please note that because of the net/html dependency, goquery requires Go1.1+.
$ go get github.com/PuerkitoBio/goquery
(optional) To run unit tests:
$ cd $GOPATH/src/github.com/PuerkitoBio/goquery
$ go test
(optional) To run benchmarks (warning: it runs for a few minutes):
$ cd $GOPATH/src/github.com/PuerkitoBio/goquery
$ go test -bench=".*"
## Changelog
**Note that goquery's API is now stable, and will not break.**
* **2016-12-29 (v1.0.2)** : Optimize allocations for `Selection.Text` (thanks to @radovskyb).
* **2016-08-28 (v1.0.1)** : Optimize performance for large documents.
* **2016-07-27 (v1.0.0)** : Tag version 1.0.0.
* **2016-06-15** : Invalid selector strings internally compile to a `Matcher` implementation that never matches any node (instead of a panic). So for example, `doc.Find("~")` returns an empty `*Selection` object.
* **2016-02-02** : Add `NodeName` utility function similar to the DOM's `nodeName` property. It returns the tag name of the first element in a selection, and other relevant values of non-element nodes (see godoc for details). Add `OuterHtml` utility function similar to the DOM's `outerHTML` property (named `OuterHtml` in small caps for consistency with the existing `Html` method on the `Selection`).
* **2015-04-20** : Add `AttrOr` helper method to return the attribute's value or a default value if absent. Thanks to [piotrkowalczuk][piotr].
* **2015-02-04** : Add more manipulation functions - Prepend* - thanks again to [Andrew Stone][thatguystone].
* **2014-11-28** : Add more manipulation functions - ReplaceWith*, Wrap* and Unwrap - thanks again to [Andrew Stone][thatguystone].
* **2014-11-07** : Add manipulation functions (thanks to [Andrew Stone][thatguystone]) and `*Matcher` functions, that receive compiled cascadia selectors instead of selector strings, thus avoiding potential panics thrown by goquery via `cascadia.MustCompile` calls. This results in better performance (selectors can be compiled once and reused) and more idiomatic error handling (you can handle cascadia's compilation errors, instead of recovering from panics, which had been bugging me for a long time). Note that the actual type expected is a `Matcher` interface, that `cascadia.Selector` implements. Other matcher implementations could be used.
* **2014-11-06** : Change import paths of net/html to golang.org/x/net/html (see https://groups.google.com/forum/#!topic/golang-nuts/eD8dh3T9yyA). Make sure to update your code to use the new import path too when you call goquery with `html.Node`s.
* **v0.3.2** : Add `NewDocumentFromReader()` (thanks jweir) which allows creating a goquery document from an io.Reader.
* **v0.3.1** : Add `NewDocumentFromResponse()` (thanks assassingj) which allows creating a goquery document from an http response.
* **v0.3.0** : Add `EachWithBreak()` which allows to break out of an `Each()` loop by returning false. This function was added instead of changing the existing `Each()` to avoid breaking compatibility.
* **v0.2.1** : Make go-getable, now that [go.net/html is Go1.0-compatible][gonet] (thanks to @matrixik for pointing this out).
* **v0.2.0** : Add support for negative indices in Slice(). **BREAKING CHANGE** `Document.Root` is removed, `Document` is now a `Selection` itself (a selection of one, the root element, just like `Document.Root` was before). Add jQuery's Closest() method.
* **v0.1.1** : Add benchmarks to use as baseline for refactorings, refactor Next...() and Prev...() methods to use the new html package's linked list features (Next/PrevSibling, FirstChild). Good performance boost (40+% in some cases).
* **v0.1.0** : Initial release.
## API
goquery exposes two structs, `Document` and `Selection`, and the `Matcher` interface. Unlike jQuery, which is loaded as part of a DOM document, and thus acts on its containing document, goquery doesn't know which HTML document to act upon. So it needs to be told, and that's what the `Document` type is for. It holds the root document node as the initial Selection value to manipulate.
jQuery often has many variants for the same function (no argument, a selector string argument, a jQuery object argument, a DOM element argument, ...). Instead of exposing the same features in goquery as a single method with variadic empty interface arguments, statically-typed signatures are used following this naming convention:
* When the jQuery equivalent can be called with no argument, it has the same name as jQuery for the no argument signature (e.g.: `Prev()`), and the version with a selector string argument is called `XxxFiltered()` (e.g.: `PrevFiltered()`)
* When the jQuery equivalent **requires** one argument, the same name as jQuery is used for the selector string version (e.g.: `Is()`)
* The signatures accepting a jQuery object as argument are defined in goquery as `XxxSelection()` and take a `*Selection` object as argument (e.g.: `FilterSelection()`)
* The signatures accepting a DOM element as argument in jQuery are defined in goquery as `XxxNodes()` and take a variadic argument of type `*html.Node` (e.g.: `FilterNodes()`)
* The signatures accepting a function as argument in jQuery are defined in goquery as `XxxFunction()` and take a function as argument (e.g.: `FilterFunction()`)
* The goquery methods that can be called with a selector string have a corresponding version that take a `Matcher` interface and are defined as `XxxMatcher()` (e.g.: `IsMatcher()`)
Utility functions that are not in jQuery but are useful in Go are implemented as functions (that take a `*Selection` as parameter), to avoid a potential naming clash on the `*Selection`'s methods (reserved for jQuery-equivalent behaviour).
The complete [godoc reference documentation can be found here][doc].
Please note that Cascadia's selectors do not necessarily match all supported selectors of jQuery (Sizzle). See the [cascadia project][cascadia] for details. Invalid selector strings compile to a `Matcher` that fails to match any node. Behaviour of the various functions that take a selector string as argument follows from that fact, e.g. (where `~` is an invalid selector string):
* `Find("~")` returns an empty selection because the selector string doesn't match anything.
* `Add("~")` returns a new selection that holds the same nodes as the original selection, because it didn't add any node (selector string didn't match anything).
* `ParentsFiltered("~")` returns an empty selection because the selector string doesn't match anything.
* `ParentsUntil("~")` returns all parents of the selection because the selector string didn't match any element to stop before the top element.
## Examples
See some tips and tricks in the [wiki][].
Adapted from example_test.go:
```Go
package main
import (
"fmt"
"log"
"github.com/PuerkitoBio/goquery"
)
func ExampleScrape() {
doc, err := goquery.NewDocument("http://metalsucks.net")
if err != nil {
log.Fatal(err)
}
// Find the review items
doc.Find(".sidebar-reviews article .content-block").Each(func(i int, s *goquery.Selection) {
// For each item found, get the band and title
band := s.Find("a").Text()
title := s.Find("i").Text()
fmt.Printf("Review %d: %s - %s\n", i, band, title)
})
}
func main() {
ExampleScrape()
}
```
## License
The [BSD 3-Clause license][bsd], the same as the [Go language][golic]. Cascadia's license is [here][caslic].
[jquery]: http://jquery.com/
[go]: http://golang.org/
[cascadia]: https://github.com/andybalholm/cascadia
[bsd]: http://opensource.org/licenses/BSD-3-Clause
[golic]: http://golang.org/LICENSE
[caslic]: https://github.com/andybalholm/cascadia/blob/master/LICENSE
[doc]: http://godoc.org/github.com/PuerkitoBio/goquery
[index]: http://api.jquery.com/index/
[gonet]: https://github.com/golang/net/
[html]: http://godoc.org/golang.org/x/net/html
[wiki]: https://github.com/PuerkitoBio/goquery/wiki/Tips-and-tricks
[thatguystone]: https://github.com/thatguystone
[piotr]: https://github.com/piotrkowalczuk

103
vendor/github.com/PuerkitoBio/goquery/array.go generated vendored Normal file

@@ -0,0 +1,103 @@
package goquery
import (
"golang.org/x/net/html"
)
// First reduces the set of matched elements to the first in the set.
// It returns a new Selection object, and an empty Selection object if the
// the selection is empty.
func (s *Selection) First() *Selection {
return s.Eq(0)
}
// Last reduces the set of matched elements to the last in the set.
// It returns a new Selection object, and an empty Selection object if
// the selection is empty.
func (s *Selection) Last() *Selection {
return s.Eq(-1)
}
// Eq reduces the set of matched elements to the one at the specified index.
// If a negative index is given, it counts backwards starting at the end of the
// set. It returns a new Selection object, and an empty Selection object if the
// index is invalid.
func (s *Selection) Eq(index int) *Selection {
if index < 0 {
index += len(s.Nodes)
}
if index >= len(s.Nodes) || index < 0 {
return newEmptySelection(s.document)
}
return s.Slice(index, index+1)
}
// Slice reduces the set of matched elements to a subset specified by a range
// of indices.
func (s *Selection) Slice(start, end int) *Selection {
if start < 0 {
start += len(s.Nodes)
}
if end < 0 {
end += len(s.Nodes)
}
return pushStack(s, s.Nodes[start:end])
}
// Get retrieves the underlying node at the specified index.
// Get without parameter is not implemented, since the node array is available
// on the Selection object.
func (s *Selection) Get(index int) *html.Node {
if index < 0 {
index += len(s.Nodes) // Negative index gets from the end
}
return s.Nodes[index]
}
// Index returns the position of the first element within the Selection object
// relative to its sibling elements.
func (s *Selection) Index() int {
if len(s.Nodes) > 0 {
return newSingleSelection(s.Nodes[0], s.document).PrevAll().Length()
}
return -1
}
// IndexSelector returns the position of the first element within the
// Selection object relative to the elements matched by the selector, or -1 if
// not found.
func (s *Selection) IndexSelector(selector string) int {
if len(s.Nodes) > 0 {
sel := s.document.Find(selector)
return indexInSlice(sel.Nodes, s.Nodes[0])
}
return -1
}
// IndexMatcher returns the position of the first element within the
// Selection object relative to the elements matched by the matcher, or -1 if
// not found.
func (s *Selection) IndexMatcher(m Matcher) int {
if len(s.Nodes) > 0 {
sel := s.document.FindMatcher(m)
return indexInSlice(sel.Nodes, s.Nodes[0])
}
return -1
}
// IndexOfNode returns the position of the specified node within the Selection
// object, or -1 if not found.
func (s *Selection) IndexOfNode(node *html.Node) int {
return indexInSlice(s.Nodes, node)
}
// IndexOfSelection returns the position of the first node in the specified
// Selection object within this Selection object, or -1 if not found.
func (s *Selection) IndexOfSelection(sel *Selection) int {
if sel != nil && len(sel.Nodes) > 0 {
return indexInSlice(s.Nodes, sel.Nodes[0])
}
return -1
}

123
vendor/github.com/PuerkitoBio/goquery/doc.go generated vendored Normal file

@@ -0,0 +1,123 @@
// Copyright (c) 2012-2016, Martin Angers & Contributors
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation and/or
// other materials provided with the distribution.
// * Neither the name of the author nor the names of its contributors may be used to
// endorse or promote products derived from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
// OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
// WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package goquery implements features similar to jQuery, including the chainable
syntax, to manipulate and query an HTML document.
It brings a syntax and a set of features similar to jQuery to the Go language.
It is based on Go's net/html package and the CSS Selector library cascadia.
Since the net/html parser returns nodes, and not a full-featured DOM
tree, jQuery's stateful manipulation functions (like height(), css(), detach())
have been left off.
Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is
the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML.
See the repository's wiki for various options on how to do this.
Syntax-wise, it is as close as possible to jQuery, with the same method names when
possible, and that warm and fuzzy chainable interface. jQuery being the
ultra-popular library that it is, writing a similar HTML-manipulating
library was better to follow its API than to start anew (in the same spirit as
Go's fmt package), even though some of its methods are less than intuitive (looking
at you, index()...).
It is hosted on GitHub, along with additional documentation in the README.md
file: https://github.com/puerkitobio/goquery
Please note that because of the net/html dependency, goquery requires Go1.1+.
The various methods are split into files based on the category of behavior.
The three dots (...) indicate that various "overloads" are available.
* array.go : array-like positional manipulation of the selection.
- Eq()
- First()
- Get()
- Index...()
- Last()
- Slice()
* expand.go : methods that expand or augment the selection's set.
- Add...()
- AndSelf()
- Union(), which is an alias for AddSelection()
* filter.go : filtering methods, that reduce the selection's set.
- End()
- Filter...()
- Has...()
- Intersection(), which is an alias of FilterSelection()
- Not...()
* iteration.go : methods to loop over the selection's nodes.
- Each()
- EachWithBreak()
- Map()
* manipulation.go : methods for modifying the document
- After...()
- Append...()
- Before...()
- Clone()
- Empty()
- Prepend...()
- Remove...()
- ReplaceWith...()
- Unwrap()
- Wrap...()
- WrapAll...()
- WrapInner...()
* property.go : methods that inspect and get the node's properties values.
- Attr*(), RemoveAttr(), SetAttr()
- AddClass(), HasClass(), RemoveClass(), ToggleClass()
- Html()
- Length()
- Size(), which is an alias for Length()
- Text()
* query.go : methods that query, or reflect, a node's identity.
- Contains()
- Is...()
* traversal.go : methods to traverse the HTML document tree.
- Children...()
- Contents()
- Find...()
- Next...()
- Parent[s]...()
- Prev...()
- Siblings...()
* type.go : definition of the types exposed by goquery.
- Document
- Selection
- Matcher
* utilities.go : definition of helper functions (and not methods on a *Selection)
that are not part of jQuery, but are useful to goquery.
- NodeName
- OuterHtml
*/
package goquery
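To connect the category list above to concrete calls, a small self-contained example (the HTML string is made up; the methods are standard goquery API):
```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	const page = `<ul><li class="x">one</li><li>two</li><li class="x">three</li></ul>`

	doc, err := goquery.NewDocumentFromReader(strings.NewReader(page))
	if err != nil {
		log.Fatal(err)
	}

	// traversal (Find), filtering (Filter) and iteration (Each) chained together
	doc.Find("li").Filter(".x").Each(func(i int, s *goquery.Selection) {
		fmt.Printf("%d: %s\n", i, s.Text())
	})
}
```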

46
vendor/github.com/PuerkitoBio/goquery/expand.go generated vendored Normal file

@@ -0,0 +1,46 @@
package goquery
import "golang.org/x/net/html"
// Add adds the selector string's matching nodes to those in the current
// selection and returns a new Selection object.
// The selector string is run in the context of the document of the current
// Selection object.
func (s *Selection) Add(selector string) *Selection {
return s.AddNodes(findWithMatcher([]*html.Node{s.document.rootNode}, compileMatcher(selector))...)
}
// AddMatcher adds the matcher's matching nodes to those in the current
// selection and returns a new Selection object.
// The matcher is run in the context of the document of the current
// Selection object.
func (s *Selection) AddMatcher(m Matcher) *Selection {
return s.AddNodes(findWithMatcher([]*html.Node{s.document.rootNode}, m)...)
}
// AddSelection adds the specified Selection object's nodes to those in the
// current selection and returns a new Selection object.
func (s *Selection) AddSelection(sel *Selection) *Selection {
if sel == nil {
return s.AddNodes()
}
return s.AddNodes(sel.Nodes...)
}
// Union is an alias for AddSelection.
func (s *Selection) Union(sel *Selection) *Selection {
return s.AddSelection(sel)
}
// AddNodes adds the specified nodes to those in the
// current selection and returns a new Selection object.
func (s *Selection) AddNodes(nodes ...*html.Node) *Selection {
return pushStack(s, appendWithoutDuplicates(s.Nodes, nodes, nil))
}
// AndSelf adds the previous set of elements on the stack to the current set.
// It returns a new Selection object containing the current Selection combined
// with the previous one.
func (s *Selection) AndSelf() *Selection {
return s.AddSelection(s.prevSel)
}

163
vendor/github.com/PuerkitoBio/goquery/filter.go generated vendored Normal file

@@ -0,0 +1,163 @@
package goquery
import "golang.org/x/net/html"
// Filter reduces the set of matched elements to those that match the selector string.
// It returns a new Selection object for this subset of matching elements.
func (s *Selection) Filter(selector string) *Selection {
return s.FilterMatcher(compileMatcher(selector))
}
// FilterMatcher reduces the set of matched elements to those that match
// the given matcher. It returns a new Selection object for this subset
// of matching elements.
func (s *Selection) FilterMatcher(m Matcher) *Selection {
return pushStack(s, winnow(s, m, true))
}
// Not removes elements from the Selection that match the selector string.
// It returns a new Selection object with the matching elements removed.
func (s *Selection) Not(selector string) *Selection {
return s.NotMatcher(compileMatcher(selector))
}
// NotMatcher removes elements from the Selection that match the given matcher.
// It returns a new Selection object with the matching elements removed.
func (s *Selection) NotMatcher(m Matcher) *Selection {
return pushStack(s, winnow(s, m, false))
}
// FilterFunction reduces the set of matched elements to those that pass the function's test.
// It returns a new Selection object for this subset of elements.
func (s *Selection) FilterFunction(f func(int, *Selection) bool) *Selection {
return pushStack(s, winnowFunction(s, f, true))
}
// NotFunction removes elements from the Selection that pass the function's test.
// It returns a new Selection object with the matching elements removed.
func (s *Selection) NotFunction(f func(int, *Selection) bool) *Selection {
return pushStack(s, winnowFunction(s, f, false))
}
// FilterNodes reduces the set of matched elements to those that match the specified nodes.
// It returns a new Selection object for this subset of elements.
func (s *Selection) FilterNodes(nodes ...*html.Node) *Selection {
return pushStack(s, winnowNodes(s, nodes, true))
}
// NotNodes removes elements from the Selection that match the specified nodes.
// It returns a new Selection object with the matching elements removed.
func (s *Selection) NotNodes(nodes ...*html.Node) *Selection {
return pushStack(s, winnowNodes(s, nodes, false))
}
// FilterSelection reduces the set of matched elements to those that match a
// node in the specified Selection object.
// It returns a new Selection object for this subset of elements.
func (s *Selection) FilterSelection(sel *Selection) *Selection {
if sel == nil {
return pushStack(s, winnowNodes(s, nil, true))
}
return pushStack(s, winnowNodes(s, sel.Nodes, true))
}
// NotSelection removes elements from the Selection that match a node in the specified
// Selection object. It returns a new Selection object with the matching elements removed.
func (s *Selection) NotSelection(sel *Selection) *Selection {
if sel == nil {
return pushStack(s, winnowNodes(s, nil, false))
}
return pushStack(s, winnowNodes(s, sel.Nodes, false))
}
// Intersection is an alias for FilterSelection.
func (s *Selection) Intersection(sel *Selection) *Selection {
return s.FilterSelection(sel)
}
// Has reduces the set of matched elements to those that have a descendant
// that matches the selector.
// It returns a new Selection object with the matching elements.
func (s *Selection) Has(selector string) *Selection {
return s.HasSelection(s.document.Find(selector))
}
// HasMatcher reduces the set of matched elements to those that have a descendant
// that matches the matcher.
// It returns a new Selection object with the matching elements.
func (s *Selection) HasMatcher(m Matcher) *Selection {
return s.HasSelection(s.document.FindMatcher(m))
}
// HasNodes reduces the set of matched elements to those that have a
// descendant that matches one of the nodes.
// It returns a new Selection object with the matching elements.
func (s *Selection) HasNodes(nodes ...*html.Node) *Selection {
return s.FilterFunction(func(_ int, sel *Selection) bool {
// Add all nodes that contain one of the specified nodes
for _, n := range nodes {
if sel.Contains(n) {
return true
}
}
return false
})
}
// HasSelection reduces the set of matched elements to those that have a
// descendant that matches one of the nodes of the specified Selection object.
// It returns a new Selection object with the matching elements.
func (s *Selection) HasSelection(sel *Selection) *Selection {
if sel == nil {
return s.HasNodes()
}
return s.HasNodes(sel.Nodes...)
}
// End ends the most recent filtering operation in the current chain and
// returns the set of matched elements to its previous state.
func (s *Selection) End() *Selection {
if s.prevSel != nil {
return s.prevSel
}
return newEmptySelection(s.document)
}
// Filter based on the matcher, and the indicator to keep (Filter) or
// to get rid of (Not) the matching elements.
func winnow(sel *Selection, m Matcher, keep bool) []*html.Node {
// Optimize if keep is requested
if keep {
return m.Filter(sel.Nodes)
}
// Use grep
return grep(sel, func(i int, s *Selection) bool {
return !m.Match(s.Get(0))
})
}
// Filter based on an array of nodes, and the indicator to keep (Filter) or
// to get rid of (Not) the matching elements.
func winnowNodes(sel *Selection, nodes []*html.Node, keep bool) []*html.Node {
if len(nodes)+len(sel.Nodes) < minNodesForSet {
return grep(sel, func(i int, s *Selection) bool {
return isInSlice(nodes, s.Get(0)) == keep
})
}
set := make(map[*html.Node]bool)
for _, n := range nodes {
set[n] = true
}
return grep(sel, func(i int, s *Selection) bool {
return set[s.Get(0)] == keep
})
}
// Filter based on a function test, and the indicator to keep (Filter) or
// to get rid of (Not) the matching elements.
func winnowFunction(sel *Selection, f func(int, *Selection) bool, keep bool) []*html.Node {
return grep(sel, func(i int, s *Selection) bool {
return f(i, s) == keep
})
}

39
vendor/github.com/PuerkitoBio/goquery/iteration.go generated vendored Normal file

@@ -0,0 +1,39 @@
package goquery
// Each iterates over a Selection object, executing a function for each
// matched element. It returns the current Selection object. The function
// f is called for each element in the selection with the index of the
// element in that selection starting at 0, and a *Selection that contains
// only that element.
func (s *Selection) Each(f func(int, *Selection)) *Selection {
for i, n := range s.Nodes {
f(i, newSingleSelection(n, s.document))
}
return s
}
// EachWithBreak iterates over a Selection object, executing a function for each
// matched element. It is identical to Each except that it is possible to break
// out of the loop by returning false in the callback function. It returns the
// current Selection object.
func (s *Selection) EachWithBreak(f func(int, *Selection) bool) *Selection {
for i, n := range s.Nodes {
if !f(i, newSingleSelection(n, s.document)) {
return s
}
}
return s
}
// Map passes each element in the current matched set through a function,
// producing a slice of string holding the returned values. The function
// f is called for each element in the selection with the index of the
// element in that selection starting at 0, and a *Selection that contains
// only that element.
func (s *Selection) Map(f func(int, *Selection) string) (result []string) {
for i, n := range s.Nodes {
result = append(result, f(i, newSingleSelection(n, s.document)))
}
return result
}

550
vendor/github.com/PuerkitoBio/goquery/manipulation.go generated vendored Normal file

@@ -0,0 +1,550 @@
package goquery
import (
"strings"
"golang.org/x/net/html"
)
// After applies the selector from the root document and inserts the matched elements
// after the elements in the set of matched elements.
//
// If one of the matched elements in the selection is not currently in the
// document, it's impossible to insert nodes after it, so it will be ignored.
//
// This follows the same rules as Selection.Append.
func (s *Selection) After(selector string) *Selection {
return s.AfterMatcher(compileMatcher(selector))
}
// AfterMatcher applies the matcher from the root document and inserts the matched elements
// after the elements in the set of matched elements.
//
// If one of the matched elements in the selection is not currently in the
// document, it's impossible to insert nodes after it, so it will be ignored.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AfterMatcher(m Matcher) *Selection {
return s.AfterNodes(m.MatchAll(s.document.rootNode)...)
}
// AfterSelection inserts the elements in the selection after each element in the set of matched
// elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AfterSelection(sel *Selection) *Selection {
return s.AfterNodes(sel.Nodes...)
}
// AfterHtml parses the html and inserts it after the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AfterHtml(html string) *Selection {
return s.AfterNodes(parseHtml(html)...)
}
// AfterNodes inserts the nodes after each element in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AfterNodes(ns ...*html.Node) *Selection {
return s.manipulateNodes(ns, true, func(sn *html.Node, n *html.Node) {
if sn.Parent != nil {
sn.Parent.InsertBefore(n, sn.NextSibling)
}
})
}
// Append appends the elements specified by the selector to the end of each element
// in the set of matched elements, following those rules:
//
// 1) The selector is applied to the root document.
//
// 2) Elements that are part of the document will be moved to the new location.
//
// 3) If there are multiple locations to append to, cloned nodes will be
// appended to all target locations except the last one, which will be moved
// as noted in (2).
func (s *Selection) Append(selector string) *Selection {
return s.AppendMatcher(compileMatcher(selector))
}
// AppendMatcher appends the elements specified by the matcher to the end of each element
// in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AppendMatcher(m Matcher) *Selection {
return s.AppendNodes(m.MatchAll(s.document.rootNode)...)
}
// AppendSelection appends the elements in the selection to the end of each element
// in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AppendSelection(sel *Selection) *Selection {
return s.AppendNodes(sel.Nodes...)
}
// AppendHtml parses the html and appends it to the set of matched elements.
func (s *Selection) AppendHtml(html string) *Selection {
return s.AppendNodes(parseHtml(html)...)
}
// AppendNodes appends the specified nodes to each node in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) AppendNodes(ns ...*html.Node) *Selection {
return s.manipulateNodes(ns, false, func(sn *html.Node, n *html.Node) {
sn.AppendChild(n)
})
}
// Before inserts the matched elements before each element in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) Before(selector string) *Selection {
return s.BeforeMatcher(compileMatcher(selector))
}
// BeforeMatcher inserts the matched elements before each element in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) BeforeMatcher(m Matcher) *Selection {
return s.BeforeNodes(m.MatchAll(s.document.rootNode)...)
}
// BeforeSelection inserts the elements in the selection before each element in the set of matched
// elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) BeforeSelection(sel *Selection) *Selection {
return s.BeforeNodes(sel.Nodes...)
}
// BeforeHtml parses the html and inserts it before the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) BeforeHtml(html string) *Selection {
return s.BeforeNodes(parseHtml(html)...)
}
// BeforeNodes inserts the nodes before each element in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) BeforeNodes(ns ...*html.Node) *Selection {
return s.manipulateNodes(ns, false, func(sn *html.Node, n *html.Node) {
if sn.Parent != nil {
sn.Parent.InsertBefore(n, sn)
}
})
}
// Clone creates a deep copy of the set of matched nodes. The new nodes will not be
// attached to the document.
func (s *Selection) Clone() *Selection {
ns := newEmptySelection(s.document)
ns.Nodes = cloneNodes(s.Nodes)
return ns
}
// Empty removes all children nodes from the set of matched elements.
// It returns the children nodes in a new Selection.
func (s *Selection) Empty() *Selection {
var nodes []*html.Node
for _, n := range s.Nodes {
for c := n.FirstChild; c != nil; c = n.FirstChild {
n.RemoveChild(c)
nodes = append(nodes, c)
}
}
return pushStack(s, nodes)
}
// Prepend prepends the elements specified by the selector to each element in
// the set of matched elements, following the same rules as Append.
func (s *Selection) Prepend(selector string) *Selection {
return s.PrependMatcher(compileMatcher(selector))
}
// PrependMatcher prepends the elements specified by the matcher to each
// element in the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) PrependMatcher(m Matcher) *Selection {
return s.PrependNodes(m.MatchAll(s.document.rootNode)...)
}
// PrependSelection prepends the elements in the selection to each element in
// the set of matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) PrependSelection(sel *Selection) *Selection {
return s.PrependNodes(sel.Nodes...)
}
// PrependHtml parses the html and prepends it to the set of matched elements.
func (s *Selection) PrependHtml(html string) *Selection {
return s.PrependNodes(parseHtml(html)...)
}
// PrependNodes prepends the specified nodes to each node in the set of
// matched elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) PrependNodes(ns ...*html.Node) *Selection {
return s.manipulateNodes(ns, true, func(sn *html.Node, n *html.Node) {
// sn.FirstChild may be nil, in which case this functions like
// sn.AppendChild()
sn.InsertBefore(n, sn.FirstChild)
})
}
// Remove removes the set of matched elements from the document.
// It returns the same selection, now consisting of nodes not in the document.
func (s *Selection) Remove() *Selection {
for _, n := range s.Nodes {
if n.Parent != nil {
n.Parent.RemoveChild(n)
}
}
return s
}
// RemoveFiltered removes the set of matched elements by selector.
// It returns the Selection of removed nodes.
func (s *Selection) RemoveFiltered(selector string) *Selection {
return s.RemoveMatcher(compileMatcher(selector))
}
// RemoveMatcher removes the set of matched elements.
// It returns the Selection of removed nodes.
func (s *Selection) RemoveMatcher(m Matcher) *Selection {
return s.FilterMatcher(m).Remove()
}
// ReplaceWith replaces each element in the set of matched elements with the
// nodes matched by the given selector.
// It returns the removed elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) ReplaceWith(selector string) *Selection {
return s.ReplaceWithMatcher(compileMatcher(selector))
}
// ReplaceWithMatcher replaces each element in the set of matched elements with
// the nodes matched by the given Matcher.
// It returns the removed elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) ReplaceWithMatcher(m Matcher) *Selection {
return s.ReplaceWithNodes(m.MatchAll(s.document.rootNode)...)
}
// ReplaceWithSelection replaces each element in the set of matched elements with
// the nodes from the given Selection.
// It returns the removed elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) ReplaceWithSelection(sel *Selection) *Selection {
return s.ReplaceWithNodes(sel.Nodes...)
}
// ReplaceWithHtml replaces each element in the set of matched elements with
// the parsed HTML.
// It returns the removed elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) ReplaceWithHtml(html string) *Selection {
return s.ReplaceWithNodes(parseHtml(html)...)
}
// ReplaceWithNodes replaces each element in the set of matched elements with
// the given nodes.
// It returns the removed elements.
//
// This follows the same rules as Selection.Append.
func (s *Selection) ReplaceWithNodes(ns ...*html.Node) *Selection {
s.AfterNodes(ns...)
return s.Remove()
}
// Unwrap removes the parents of the set of matched elements, leaving the matched
// elements (and their siblings, if any) in their place.
// It returns the original selection.
func (s *Selection) Unwrap() *Selection {
s.Parent().Each(func(i int, ss *Selection) {
// For some reason, jquery allows unwrap to remove the <head> element, so
// allowing it here too. Same for <html>. Why it allows those elements to
// be unwrapped while not allowing body is a mystery to me.
if ss.Nodes[0].Data != "body" {
ss.ReplaceWithSelection(ss.Contents())
}
})
return s
}
// Wrap wraps each element in the set of matched elements inside the first
// element matched by the given selector. The matched child is cloned before
// being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) Wrap(selector string) *Selection {
return s.WrapMatcher(compileMatcher(selector))
}
// WrapMatcher wraps each element in the set of matched elements inside the
// first element matched by the given matcher. The matched child is cloned
// before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapMatcher(m Matcher) *Selection {
return s.wrapNodes(m.MatchAll(s.document.rootNode)...)
}
// WrapSelection wraps each element in the set of matched elements inside the
// first element in the given Selection. The element is cloned before being
// inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapSelection(sel *Selection) *Selection {
return s.wrapNodes(sel.Nodes...)
}
// WrapHtml wraps each element in the set of matched elements inside the inner-
// most child of the given HTML.
//
// It returns the original set of elements.
func (s *Selection) WrapHtml(html string) *Selection {
return s.wrapNodes(parseHtml(html)...)
}
// WrapNode wraps each element in the set of matched elements inside the inner-
// most child of the given node. The given node is copied before being inserted
// into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapNode(n *html.Node) *Selection {
return s.wrapNodes(n)
}
func (s *Selection) wrapNodes(ns ...*html.Node) *Selection {
s.Each(func(i int, ss *Selection) {
ss.wrapAllNodes(ns...)
})
return s
}
// WrapAll wraps a single HTML structure, matched by the given selector, around
// all elements in the set of matched elements. The matched child is cloned
// before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapAll(selector string) *Selection {
return s.WrapAllMatcher(compileMatcher(selector))
}
// WrapAllMatcher wraps a single HTML structure, matched by the given Matcher,
// around all elements in the set of matched elements. The matched child is
// cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapAllMatcher(m Matcher) *Selection {
return s.wrapAllNodes(m.MatchAll(s.document.rootNode)...)
}
// WrapAllSelection wraps a single HTML structure, the first node of the given
// Selection, around all elements in the set of matched elements. The matched
// child is cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapAllSelection(sel *Selection) *Selection {
return s.wrapAllNodes(sel.Nodes...)
}
// WrapAllHtml wraps the given HTML structure around all elements in the set of
// matched elements. The matched child is cloned before being inserted into the
// document.
//
// It returns the original set of elements.
func (s *Selection) WrapAllHtml(html string) *Selection {
return s.wrapAllNodes(parseHtml(html)...)
}
func (s *Selection) wrapAllNodes(ns ...*html.Node) *Selection {
if len(ns) > 0 {
return s.WrapAllNode(ns[0])
}
return s
}
// WrapAllNode wraps the given node around the first element in the Selection,
// making all other nodes in the Selection children of the given node. The node
// is cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapAllNode(n *html.Node) *Selection {
if s.Size() == 0 {
return s
}
wrap := cloneNode(n)
first := s.Nodes[0]
if first.Parent != nil {
first.Parent.InsertBefore(wrap, first)
first.Parent.RemoveChild(first)
}
for c := getFirstChildEl(wrap); c != nil; c = getFirstChildEl(wrap) {
wrap = c
}
newSingleSelection(wrap, s.document).AppendSelection(s)
return s
}
// WrapInner wraps an HTML structure, matched by the given selector, around the
// content of each element in the set of matched elements. The matched child is
// cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapInner(selector string) *Selection {
return s.WrapInnerMatcher(compileMatcher(selector))
}
// WrapInnerMatcher wraps an HTML structure, matched by the given matcher,
// around the content of each element in the set of matched elements. The matched
// child is cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapInnerMatcher(m Matcher) *Selection {
return s.wrapInnerNodes(m.MatchAll(s.document.rootNode)...)
}
// WrapInnerSelection wraps an HTML structure, the first node of the given
// Selection, around the content of each element in the set of matched elements.
// The matched child is cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapInnerSelection(sel *Selection) *Selection {
return s.wrapInnerNodes(sel.Nodes...)
}
// WrapInnerHtml wraps an HTML structure, parsed from the given HTML string,
// around the content of each element in the set of matched elements. The matched
// child is cloned before being inserted into the document.
//
// It returns the original set of elements.
func (s *Selection) WrapInnerHtml(html string) *Selection {
return s.wrapInnerNodes(parseHtml(html)...)
}
// WrapInnerNode wraps the given node around the content of each element in the
// set of matched elements. The node is cloned before being inserted into the
// document.
//
// It returns the original set of elements.
func (s *Selection) WrapInnerNode(n *html.Node) *Selection {
return s.wrapInnerNodes(n)
}
func (s *Selection) wrapInnerNodes(ns ...*html.Node) *Selection {
if len(ns) == 0 {
return s
}
s.Each(func(i int, s *Selection) {
contents := s.Contents()
if contents.Size() > 0 {
contents.wrapAllNodes(ns...)
} else {
s.AppendNodes(cloneNode(ns[0]))
}
})
return s
}
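The wrapping helpers above are easiest to see side by side. A small usage sketch, assuming the vendored package is imported as github.com/PuerkitoBio/goquery; the HTML fragment and the expected output in the comments are illustrative only.

package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(`<p>a</p><p>b</p>`))
	if err != nil {
		panic(err)
	}
	// WrapHtml puts a clone of the wrapper around each matched element...
	doc.Find("p").WrapHtml(`<div class="each"></div>`)
	// ...while WrapInnerHtml wraps the contents of each matched element.
	doc.Find("p").WrapInnerHtml(`<em></em>`)

	out, _ := doc.Find("body").Html()
	fmt.Println(out)
	// Roughly: <div class="each"><p><em>a</em></p></div><div class="each"><p><em>b</em></p></div>
}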
func parseHtml(h string) []*html.Node {
// Errors are only returned when the io.Reader returns any error besides
// EOF, but strings.Reader never will
nodes, err := html.ParseFragment(strings.NewReader(h), &html.Node{Type: html.ElementNode})
if err != nil {
panic("goquery: failed to parse HTML: " + err.Error())
}
return nodes
}
// Get the first child that is an ElementNode
func getFirstChildEl(n *html.Node) *html.Node {
c := n.FirstChild
for c != nil && c.Type != html.ElementNode {
c = c.NextSibling
}
return c
}
// Deep copy a slice of nodes.
func cloneNodes(ns []*html.Node) []*html.Node {
cns := make([]*html.Node, 0, len(ns))
for _, n := range ns {
cns = append(cns, cloneNode(n))
}
return cns
}
// Deep copy a node. The new node has clones of all the original node's
// children but none of its parents or siblings.
func cloneNode(n *html.Node) *html.Node {
nn := &html.Node{
Type: n.Type,
DataAtom: n.DataAtom,
Data: n.Data,
Attr: make([]html.Attribute, len(n.Attr)),
}
copy(nn.Attr, n.Attr)
for c := n.FirstChild; c != nil; c = c.NextSibling {
nn.AppendChild(cloneNode(c))
}
return nn
}
func (s *Selection) manipulateNodes(ns []*html.Node, reverse bool,
f func(sn *html.Node, n *html.Node)) *Selection {
lasti := s.Size() - 1
// net.Html doesn't provide document fragments for insertion, so to get
// things in the correct order with After() and Prepend(), the callback
// needs to be called on the reverse of the nodes.
if reverse {
for i, j := 0, len(ns)-1; i < j; i, j = i+1, j-1 {
ns[i], ns[j] = ns[j], ns[i]
}
}
for i, sn := range s.Nodes {
for _, n := range ns {
if i != lasti {
f(sn, cloneNode(n))
} else {
if n.Parent != nil {
n.Parent.RemoveChild(n)
}
f(sn, n)
}
}
}
return s
}
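A short sketch of the insertion and removal methods defined in this file, again assuming the canonical import path github.com/PuerkitoBio/goquery; the fragment and the output comment are illustrative.

package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(
		`<div id="box"><span class="old">hi</span></div>`))
	if err != nil {
		panic(err)
	}
	// Append parsed HTML to the matched element, then replace the old child.
	doc.Find("#box").AppendHtml(`<span class="new">there</span>`)
	doc.Find(".old").ReplaceWithHtml(`<em>hello</em>`)

	out, err := doc.Find("#box").Html()
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // roughly: <em>hello</em><span class="new">there</span>
}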

vendor/github.com/PuerkitoBio/goquery/property.go

@ -0,0 +1,275 @@
package goquery
import (
"bytes"
"regexp"
"strings"
"golang.org/x/net/html"
)
var rxClassTrim = regexp.MustCompile("[\t\r\n]")
// Attr gets the specified attribute's value for the first element in the
// Selection. To get the value for each element individually, use a looping
// construct such as the Each or Map method.
func (s *Selection) Attr(attrName string) (val string, exists bool) {
if len(s.Nodes) == 0 {
return
}
return getAttributeValue(attrName, s.Nodes[0])
}
// AttrOr works like Attr but returns default value if attribute is not present.
func (s *Selection) AttrOr(attrName, defaultValue string) string {
if len(s.Nodes) == 0 {
return defaultValue
}
val, exists := getAttributeValue(attrName, s.Nodes[0])
if !exists {
return defaultValue
}
return val
}
// RemoveAttr removes the named attribute from each element in the set of matched elements.
func (s *Selection) RemoveAttr(attrName string) *Selection {
for _, n := range s.Nodes {
removeAttr(n, attrName)
}
return s
}
// SetAttr sets the given attribute on each element in the set of matched elements.
func (s *Selection) SetAttr(attrName, val string) *Selection {
for _, n := range s.Nodes {
attr := getAttributePtr(attrName, n)
if attr == nil {
n.Attr = append(n.Attr, html.Attribute{Key: attrName, Val: val})
} else {
attr.Val = val
}
}
return s
}
// Text gets the combined text contents of each element in the set of matched
// elements, including their descendants.
func (s *Selection) Text() string {
var buf bytes.Buffer
// Slightly optimized vs calling Each: no single selection object created
var f func(*html.Node)
f = func(n *html.Node) {
if n.Type == html.TextNode {
// Keep newlines and spaces, like jQuery
buf.WriteString(n.Data)
}
if n.FirstChild != nil {
for c := n.FirstChild; c != nil; c = c.NextSibling {
f(c)
}
}
}
for _, n := range s.Nodes {
f(n)
}
return buf.String()
}
// Size is an alias for Length.
func (s *Selection) Size() int {
return s.Length()
}
// Length returns the number of elements in the Selection object.
func (s *Selection) Length() int {
return len(s.Nodes)
}
// Html gets the HTML contents of the first element in the set of matched
// elements. It includes text and comment nodes.
func (s *Selection) Html() (ret string, e error) {
// Since there is no .innerHtml, the HTML content must be re-created from
// the nodes using html.Render.
var buf bytes.Buffer
if len(s.Nodes) > 0 {
for c := s.Nodes[0].FirstChild; c != nil; c = c.NextSibling {
e = html.Render(&buf, c)
if e != nil {
return
}
}
ret = buf.String()
}
return
}
// AddClass adds the given class(es) to each element in the set of matched elements.
// Multiple class names can be specified, separated by a space or via multiple arguments.
func (s *Selection) AddClass(class ...string) *Selection {
classStr := strings.TrimSpace(strings.Join(class, " "))
if classStr == "" {
return s
}
tcls := getClassesSlice(classStr)
for _, n := range s.Nodes {
curClasses, attr := getClassesAndAttr(n, true)
for _, newClass := range tcls {
if !strings.Contains(curClasses, " "+newClass+" ") {
curClasses += newClass + " "
}
}
setClasses(n, attr, curClasses)
}
return s
}
// HasClass determines whether any of the matched elements are assigned the
// given class.
func (s *Selection) HasClass(class string) bool {
class = " " + class + " "
for _, n := range s.Nodes {
classes, _ := getClassesAndAttr(n, false)
if strings.Contains(classes, class) {
return true
}
}
return false
}
// RemoveClass removes the given class(es) from each element in the set of matched elements.
// Multiple class names can be specified, separated by a space or via multiple arguments.
// If no class name is provided, all classes are removed.
func (s *Selection) RemoveClass(class ...string) *Selection {
var rclasses []string
classStr := strings.TrimSpace(strings.Join(class, " "))
remove := classStr == ""
if !remove {
rclasses = getClassesSlice(classStr)
}
for _, n := range s.Nodes {
if remove {
removeAttr(n, "class")
} else {
classes, attr := getClassesAndAttr(n, true)
for _, rcl := range rclasses {
classes = strings.Replace(classes, " "+rcl+" ", " ", -1)
}
setClasses(n, attr, classes)
}
}
return s
}
// ToggleClass adds or removes the given class(es) for each element in the set of matched elements.
// Multiple class names can be specified, separated by a space or via multiple arguments.
func (s *Selection) ToggleClass(class ...string) *Selection {
classStr := strings.TrimSpace(strings.Join(class, " "))
if classStr == "" {
return s
}
tcls := getClassesSlice(classStr)
for _, n := range s.Nodes {
classes, attr := getClassesAndAttr(n, true)
for _, tcl := range tcls {
if strings.Contains(classes, " "+tcl+" ") {
classes = strings.Replace(classes, " "+tcl+" ", " ", -1)
} else {
classes += tcl + " "
}
}
setClasses(n, attr, classes)
}
return s
}
func getAttributePtr(attrName string, n *html.Node) *html.Attribute {
if n == nil {
return nil
}
for i, a := range n.Attr {
if a.Key == attrName {
return &n.Attr[i]
}
}
return nil
}
// Private function to get the specified attribute's value from a node.
func getAttributeValue(attrName string, n *html.Node) (val string, exists bool) {
if a := getAttributePtr(attrName, n); a != nil {
val = a.Val
exists = true
}
return
}
// Get and normalize the "class" attribute from the node.
func getClassesAndAttr(n *html.Node, create bool) (classes string, attr *html.Attribute) {
// Applies only to element nodes
if n.Type == html.ElementNode {
attr = getAttributePtr("class", n)
if attr == nil && create {
n.Attr = append(n.Attr, html.Attribute{
Key: "class",
Val: "",
})
attr = &n.Attr[len(n.Attr)-1]
}
}
if attr == nil {
classes = " "
} else {
classes = rxClassTrim.ReplaceAllString(" "+attr.Val+" ", " ")
}
return
}
func getClassesSlice(classes string) []string {
return strings.Split(rxClassTrim.ReplaceAllString(" "+classes+" ", " "), " ")
}
func removeAttr(n *html.Node, attrName string) {
for i, a := range n.Attr {
if a.Key == attrName {
n.Attr[i], n.Attr[len(n.Attr)-1], n.Attr =
n.Attr[len(n.Attr)-1], html.Attribute{}, n.Attr[:len(n.Attr)-1]
return
}
}
}
func setClasses(n *html.Node, attr *html.Attribute, classes string) {
classes = strings.TrimSpace(classes)
if classes == "" {
removeAttr(n, "class")
return
}
attr.Val = classes
}
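A minimal sketch of the attribute and class helpers above, assuming the canonical import path github.com/PuerkitoBio/goquery; the HTML fragment and values in the comments are illustrative.

package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(
		`<a href="https://example.com" class="link">Example</a>`))
	if err != nil {
		panic(err)
	}
	link := doc.Find("a")

	href, ok := link.Attr("href")     // "https://example.com", true
	rel := link.AttrOr("rel", "none") // attribute missing, so "none"
	link.AddClass("visited")          // class becomes "link visited"

	fmt.Println(href, ok, rel, link.HasClass("visited"), link.Text())
}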

vendor/github.com/PuerkitoBio/goquery/query.go

@ -0,0 +1,53 @@
package goquery
import "golang.org/x/net/html"
// Is checks the current matched set of elements against a selector and
// returns true if at least one of these elements matches.
func (s *Selection) Is(selector string) bool {
if len(s.Nodes) > 0 {
return s.IsMatcher(compileMatcher(selector))
}
return false
}
// IsMatcher checks the current matched set of elements against a matcher and
// returns true if at least one of these elements matches.
func (s *Selection) IsMatcher(m Matcher) bool {
if len(s.Nodes) > 0 {
if len(s.Nodes) == 1 {
return m.Match(s.Nodes[0])
}
return len(m.Filter(s.Nodes)) > 0
}
return false
}
// IsFunction checks the current matched set of elements against a predicate and
// returns true if at least one of these elements matches.
func (s *Selection) IsFunction(f func(int, *Selection) bool) bool {
return s.FilterFunction(f).Length() > 0
}
// IsSelection checks the current matched set of elements against a Selection object
// and returns true if at least one of these elements matches.
func (s *Selection) IsSelection(sel *Selection) bool {
return s.FilterSelection(sel).Length() > 0
}
// IsNodes checks the current matched set of elements against the specified nodes
// and returns true if at least one of these elements matches.
func (s *Selection) IsNodes(nodes ...*html.Node) bool {
return s.FilterNodes(nodes...).Length() > 0
}
// Contains returns true if the specified Node is within,
// at any depth, one of the nodes in the Selection object.
// It is NOT inclusive, to behave like jQuery's implementation, and
// unlike Javascript's .contains, so if the contained
// node is itself in the selection, it returns false.
func (s *Selection) Contains(n *html.Node) bool {
return sliceContains(s.Nodes, n)
}
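A small sketch of the matching predicates above, assuming the canonical import path github.com/PuerkitoBio/goquery; the fragment and the comments on expected results are illustrative.

package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(
		`<p class="note">a</p><p>b</p>`))
	if err != nil {
		panic(err)
	}
	paras := doc.Find("p")

	fmt.Println(paras.Is(".note")) // true: at least one <p> has the class
	fmt.Println(paras.IsFunction(func(i int, s *goquery.Selection) bool {
		return s.Text() == "b"
	})) // true

	// Contains reports whether the node is a strict descendant of the selection.
	fmt.Println(doc.Contains(paras.Get(0))) // true: <p> sits below the document root
}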

vendor/github.com/PuerkitoBio/goquery/traversal.go

@ -0,0 +1,698 @@
package goquery
import "golang.org/x/net/html"
type siblingType int
// Sibling type, used internally when iterating over children at the same
// level (siblings) to specify which nodes are requested.
const (
siblingPrevUntil siblingType = iota - 3
siblingPrevAll
siblingPrev
siblingAll
siblingNext
siblingNextAll
siblingNextUntil
siblingAllIncludingNonElements
)
// Find gets the descendants of each element in the current set of matched
// elements, filtered by a selector. It returns a new Selection object
// containing these matched elements.
func (s *Selection) Find(selector string) *Selection {
return pushStack(s, findWithMatcher(s.Nodes, compileMatcher(selector)))
}
// FindMatcher gets the descendants of each element in the current set of matched
// elements, filtered by the matcher. It returns a new Selection object
// containing these matched elements.
func (s *Selection) FindMatcher(m Matcher) *Selection {
return pushStack(s, findWithMatcher(s.Nodes, m))
}
// FindSelection gets the descendants of each element in the current
// Selection, filtered by a Selection. It returns a new Selection object
// containing these matched elements.
func (s *Selection) FindSelection(sel *Selection) *Selection {
if sel == nil {
return pushStack(s, nil)
}
return s.FindNodes(sel.Nodes...)
}
// FindNodes gets the descendants of each element in the current
// Selection, filtered by some nodes. It returns a new Selection object
// containing these matched elements.
func (s *Selection) FindNodes(nodes ...*html.Node) *Selection {
return pushStack(s, mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
if sliceContains(s.Nodes, n) {
return []*html.Node{n}
}
return nil
}))
}
// Contents gets the children of each element in the Selection,
// including text and comment nodes. It returns a new Selection object
// containing these elements.
func (s *Selection) Contents() *Selection {
return pushStack(s, getChildrenNodes(s.Nodes, siblingAllIncludingNonElements))
}
// ContentsFiltered gets the children of each element in the Selection,
// filtered by the specified selector. It returns a new Selection
// object containing these elements. Since selectors only act on Element nodes,
// this function is an alias to ChildrenFiltered unless the selector is empty,
// in which case it is an alias to Contents.
func (s *Selection) ContentsFiltered(selector string) *Selection {
if selector != "" {
return s.ChildrenFiltered(selector)
}
return s.Contents()
}
// ContentsMatcher gets the children of each element in the Selection,
// filtered by the specified matcher. It returns a new Selection
// object containing these elements. Since matchers only act on Element nodes,
// this function is an alias to ChildrenMatcher.
func (s *Selection) ContentsMatcher(m Matcher) *Selection {
return s.ChildrenMatcher(m)
}
// Children gets the child elements of each element in the Selection.
// It returns a new Selection object containing these elements.
func (s *Selection) Children() *Selection {
return pushStack(s, getChildrenNodes(s.Nodes, siblingAll))
}
// ChildrenFiltered gets the child elements of each element in the Selection,
// filtered by the specified selector. It returns a new
// Selection object containing these elements.
func (s *Selection) ChildrenFiltered(selector string) *Selection {
return filterAndPush(s, getChildrenNodes(s.Nodes, siblingAll), compileMatcher(selector))
}
// ChildrenMatcher gets the child elements of each element in the Selection,
// filtered by the specified matcher. It returns a new
// Selection object containing these elements.
func (s *Selection) ChildrenMatcher(m Matcher) *Selection {
return filterAndPush(s, getChildrenNodes(s.Nodes, siblingAll), m)
}
// Parent gets the parent of each element in the Selection. It returns a
// new Selection object containing the matched elements.
func (s *Selection) Parent() *Selection {
return pushStack(s, getParentNodes(s.Nodes))
}
// ParentFiltered gets the parent of each element in the Selection filtered by a
// selector. It returns a new Selection object containing the matched elements.
func (s *Selection) ParentFiltered(selector string) *Selection {
return filterAndPush(s, getParentNodes(s.Nodes), compileMatcher(selector))
}
// ParentMatcher gets the parent of each element in the Selection filtered by a
// matcher. It returns a new Selection object containing the matched elements.
func (s *Selection) ParentMatcher(m Matcher) *Selection {
return filterAndPush(s, getParentNodes(s.Nodes), m)
}
// Closest gets the first element that matches the selector by testing the
// element itself and traversing up through its ancestors in the DOM tree.
func (s *Selection) Closest(selector string) *Selection {
cs := compileMatcher(selector)
return s.ClosestMatcher(cs)
}
// ClosestMatcher gets the first element that matches the matcher by testing the
// element itself and traversing up through its ancestors in the DOM tree.
func (s *Selection) ClosestMatcher(m Matcher) *Selection {
return pushStack(s, mapNodes(s.Nodes, func(i int, n *html.Node) []*html.Node {
// For each node in the selection, test the node itself, then each parent
// until a match is found.
for ; n != nil; n = n.Parent {
if m.Match(n) {
return []*html.Node{n}
}
}
return nil
}))
}
// ClosestNodes gets the first element that matches one of the nodes by testing the
// element itself and traversing up through its ancestors in the DOM tree.
func (s *Selection) ClosestNodes(nodes ...*html.Node) *Selection {
set := make(map[*html.Node]bool)
for _, n := range nodes {
set[n] = true
}
return pushStack(s, mapNodes(s.Nodes, func(i int, n *html.Node) []*html.Node {
// For each node in the selection, test the node itself, then each parent
// until a match is found.
for ; n != nil; n = n.Parent {
if set[n] {
return []*html.Node{n}
}
}
return nil
}))
}
// ClosestSelection gets the first element that matches one of the nodes in the
// Selection by testing the element itself and traversing up through its ancestors
// in the DOM tree.
func (s *Selection) ClosestSelection(sel *Selection) *Selection {
if sel == nil {
return pushStack(s, nil)
}
return s.ClosestNodes(sel.Nodes...)
}
// Parents gets the ancestors of each element in the current Selection. It
// returns a new Selection object with the matched elements.
func (s *Selection) Parents() *Selection {
return pushStack(s, getParentsNodes(s.Nodes, nil, nil))
}
// ParentsFiltered gets the ancestors of each element in the current
// Selection, filtered by a selector. It returns a new Selection object with
// the matched elements.
func (s *Selection) ParentsFiltered(selector string) *Selection {
return filterAndPush(s, getParentsNodes(s.Nodes, nil, nil), compileMatcher(selector))
}
// ParentsMatcher gets the ancestors of each element in the current
// Selection, filtered by a matcher. It returns a new Selection object with
// the matched elements.
func (s *Selection) ParentsMatcher(m Matcher) *Selection {
return filterAndPush(s, getParentsNodes(s.Nodes, nil, nil), m)
}
// ParentsUntil gets the ancestors of each element in the Selection, up to but
// not including the element matched by the selector. It returns a new Selection
// object containing the matched elements.
func (s *Selection) ParentsUntil(selector string) *Selection {
return pushStack(s, getParentsNodes(s.Nodes, compileMatcher(selector), nil))
}
// ParentsUntilMatcher gets the ancestors of each element in the Selection, up to but
// not including the element matched by the matcher. It returns a new Selection
// object containing the matched elements.
func (s *Selection) ParentsUntilMatcher(m Matcher) *Selection {
return pushStack(s, getParentsNodes(s.Nodes, m, nil))
}
// ParentsUntilSelection gets the ancestors of each element in the Selection,
// up to but not including the elements in the specified Selection. It returns a
// new Selection object containing the matched elements.
func (s *Selection) ParentsUntilSelection(sel *Selection) *Selection {
if sel == nil {
return s.Parents()
}
return s.ParentsUntilNodes(sel.Nodes...)
}
// ParentsUntilNodes gets the ancestors of each element in the Selection,
// up to but not including the specified nodes. It returns a
// new Selection object containing the matched elements.
func (s *Selection) ParentsUntilNodes(nodes ...*html.Node) *Selection {
return pushStack(s, getParentsNodes(s.Nodes, nil, nodes))
}
// ParentsFilteredUntil is like ParentsUntil, with the option to filter the
// results based on a selector string. It returns a new Selection
// object containing the matched elements.
func (s *Selection) ParentsFilteredUntil(filterSelector, untilSelector string) *Selection {
return filterAndPush(s, getParentsNodes(s.Nodes, compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
}
// ParentsFilteredUntilMatcher is like ParentsUntilMatcher, with the option to filter the
// results based on a matcher. It returns a new Selection object containing the matched elements.
func (s *Selection) ParentsFilteredUntilMatcher(filter, until Matcher) *Selection {
return filterAndPush(s, getParentsNodes(s.Nodes, until, nil), filter)
}
// ParentsFilteredUntilSelection is like ParentsUntilSelection, with the
// option to filter the results based on a selector string. It returns a new
// Selection object containing the matched elements.
func (s *Selection) ParentsFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
return s.ParentsMatcherUntilSelection(compileMatcher(filterSelector), sel)
}
// ParentsMatcherUntilSelection is like ParentsUntilSelection, with the
// option to filter the results based on a matcher. It returns a new
// Selection object containing the matched elements.
func (s *Selection) ParentsMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
if sel == nil {
return s.ParentsMatcher(filter)
}
return s.ParentsMatcherUntilNodes(filter, sel.Nodes...)
}
// ParentsFilteredUntilNodes is like ParentsUntilNodes, with the
// option to filter the results based on a selector string. It returns a new
// Selection object containing the matched elements.
func (s *Selection) ParentsFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
return filterAndPush(s, getParentsNodes(s.Nodes, nil, nodes), compileMatcher(filterSelector))
}
// ParentsMatcherUntilNodes is like ParentsUntilNodes, with the
// option to filter the results based on a matcher. It returns a new
// Selection object containing the matched elements.
func (s *Selection) ParentsMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
return filterAndPush(s, getParentsNodes(s.Nodes, nil, nodes), filter)
}
// Siblings gets the siblings of each element in the Selection. It returns
// a new Selection object containing the matched elements.
func (s *Selection) Siblings() *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil))
}
// SiblingsFiltered gets the siblings of each element in the Selection
// filtered by a selector. It returns a new Selection object containing the
// matched elements.
func (s *Selection) SiblingsFiltered(selector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil), compileMatcher(selector))
}
// SiblingsMatcher gets the siblings of each element in the Selection
// filtered by a matcher. It returns a new Selection object containing the
// matched elements.
func (s *Selection) SiblingsMatcher(m Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil), m)
}
// Next gets the immediately following sibling of each element in the
// Selection. It returns a new Selection object containing the matched elements.
func (s *Selection) Next() *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil))
}
// NextFiltered gets the immediately following sibling of each element in the
// Selection filtered by a selector. It returns a new Selection object
// containing the matched elements.
func (s *Selection) NextFiltered(selector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil), compileMatcher(selector))
}
// NextMatcher gets the immediately following sibling of each element in the
// Selection filtered by a matcher. It returns a new Selection object
// containing the matched elements.
func (s *Selection) NextMatcher(m Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil), m)
}
// NextAll gets all the following siblings of each element in the
// Selection. It returns a new Selection object containing the matched elements.
func (s *Selection) NextAll() *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil))
}
// NextAllFiltered gets all the following siblings of each element in the
// Selection filtered by a selector. It returns a new Selection object
// containing the matched elements.
func (s *Selection) NextAllFiltered(selector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil), compileMatcher(selector))
}
// NextAllMatcher gets all the following siblings of each element in the
// Selection filtered by a matcher. It returns a new Selection object
// containing the matched elements.
func (s *Selection) NextAllMatcher(m Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil), m)
}
// Prev gets the immediately preceding sibling of each element in the
// Selection. It returns a new Selection object containing the matched elements.
func (s *Selection) Prev() *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil))
}
// PrevFiltered gets the immediately preceding sibling of each element in the
// Selection filtered by a selector. It returns a new Selection object
// containing the matched elements.
func (s *Selection) PrevFiltered(selector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil), compileMatcher(selector))
}
// PrevMatcher gets the immediately preceding sibling of each element in the
// Selection filtered by a matcher. It returns a new Selection object
// containing the matched elements.
func (s *Selection) PrevMatcher(m Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil), m)
}
// PrevAll gets all the preceding siblings of each element in the
// Selection. It returns a new Selection object containing the matched elements.
func (s *Selection) PrevAll() *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil))
}
// PrevAllFiltered gets all the preceding siblings of each element in the
// Selection filtered by a selector. It returns a new Selection object
// containing the matched elements.
func (s *Selection) PrevAllFiltered(selector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil), compileMatcher(selector))
}
// PrevAllMatcher gets all the preceding siblings of each element in the
// Selection filtered by a matcher. It returns a new Selection object
// containing the matched elements.
func (s *Selection) PrevAllMatcher(m Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil), m)
}
// NextUntil gets all following siblings of each element up to but not
// including the element matched by the selector. It returns a new Selection
// object containing the matched elements.
func (s *Selection) NextUntil(selector string) *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
compileMatcher(selector), nil))
}
// NextUntilMatcher gets all following siblings of each element up to but not
// including the element matched by the matcher. It returns a new Selection
// object containing the matched elements.
func (s *Selection) NextUntilMatcher(m Matcher) *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
m, nil))
}
// NextUntilSelection gets all following siblings of each element up to but not
// including the element matched by the Selection. It returns a new Selection
// object containing the matched elements.
func (s *Selection) NextUntilSelection(sel *Selection) *Selection {
if sel == nil {
return s.NextAll()
}
return s.NextUntilNodes(sel.Nodes...)
}
// NextUntilNodes gets all following siblings of each element up to but not
// including the element matched by the nodes. It returns a new Selection
// object containing the matched elements.
func (s *Selection) NextUntilNodes(nodes ...*html.Node) *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
nil, nodes))
}
// PrevUntil gets all preceding siblings of each element up to but not
// including the element matched by the selector. It returns a new Selection
// object containing the matched elements.
func (s *Selection) PrevUntil(selector string) *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
compileMatcher(selector), nil))
}
// PrevUntilMatcher gets all preceding siblings of each element up to but not
// including the element matched by the matcher. It returns a new Selection
// object containing the matched elements.
func (s *Selection) PrevUntilMatcher(m Matcher) *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
m, nil))
}
// PrevUntilSelection gets all preceding siblings of each element up to but not
// including the element matched by the Selection. It returns a new Selection
// object containing the matched elements.
func (s *Selection) PrevUntilSelection(sel *Selection) *Selection {
if sel == nil {
return s.PrevAll()
}
return s.PrevUntilNodes(sel.Nodes...)
}
// PrevUntilNodes gets all preceding siblings of each element up to but not
// including the element matched by the nodes. It returns a new Selection
// object containing the matched elements.
func (s *Selection) PrevUntilNodes(nodes ...*html.Node) *Selection {
return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
nil, nodes))
}
// NextFilteredUntil is like NextUntil, with the option to filter
// the results based on a selector string.
// It returns a new Selection object containing the matched elements.
func (s *Selection) NextFilteredUntil(filterSelector, untilSelector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
}
// NextFilteredUntilMatcher is like NextUntilMatcher, with the option to filter
// the results based on a matcher.
// It returns a new Selection object containing the matched elements.
func (s *Selection) NextFilteredUntilMatcher(filter, until Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
until, nil), filter)
}
// NextFilteredUntilSelection is like NextUntilSelection, with the
// option to filter the results based on a selector string. It returns a new
// Selection object containing the matched elements.
func (s *Selection) NextFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
return s.NextMatcherUntilSelection(compileMatcher(filterSelector), sel)
}
// NextMatcherUntilSelection is like NextUntilSelection, with the
// option to filter the results based on a matcher. It returns a new
// Selection object containing the matched elements.
func (s *Selection) NextMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
if sel == nil {
return s.NextMatcher(filter)
}
return s.NextMatcherUntilNodes(filter, sel.Nodes...)
}
// NextFilteredUntilNodes is like NextUntilNodes, with the
// option to filter the results based on a selector string. It returns a new
// Selection object containing the matched elements.
func (s *Selection) NextFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
nil, nodes), compileMatcher(filterSelector))
}
// NextMatcherUntilNodes is like NextUntilNodes, with the
// option to filter the results based on a matcher. It returns a new
// Selection object containing the matched elements.
func (s *Selection) NextMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
nil, nodes), filter)
}
// PrevFilteredUntil is like PrevUntil, with the option to filter
// the results based on a selector string.
// It returns a new Selection object containing the matched elements.
func (s *Selection) PrevFilteredUntil(filterSelector, untilSelector string) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
}
// PrevFilteredUntilMatcher is like PrevUntilMatcher, with the option to filter
// the results based on a matcher.
// It returns a new Selection object containing the matched elements.
func (s *Selection) PrevFilteredUntilMatcher(filter, until Matcher) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
until, nil), filter)
}
// PrevFilteredUntilSelection is like PrevUntilSelection, with the
// option to filter the results based on a selector string. It returns a new
// Selection object containing the matched elements.
func (s *Selection) PrevFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
return s.PrevMatcherUntilSelection(compileMatcher(filterSelector), sel)
}
// PrevMatcherUntilSelection is like PrevUntilSelection, with the
// option to filter the results based on a matcher. It returns a new
// Selection object containing the matched elements.
func (s *Selection) PrevMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
if sel == nil {
return s.PrevMatcher(filter)
}
return s.PrevMatcherUntilNodes(filter, sel.Nodes...)
}
// PrevFilteredUntilNodes is like PrevUntilNodes, with the
// option to filter the results based on a selector string. It returns a new
// Selection object containing the matched elements.
func (s *Selection) PrevFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
nil, nodes), compileMatcher(filterSelector))
}
// PrevMatcherUntilNodes is like PrevUntilNodes, with the
// option to filter the results based on a matcher. It returns a new
// Selection object containing the matched elements.
func (s *Selection) PrevMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
nil, nodes), filter)
}
// Filter and push filters the nodes based on a matcher, and pushes the results
// on the stack, with the srcSel as previous selection.
func filterAndPush(srcSel *Selection, nodes []*html.Node, m Matcher) *Selection {
// Create a temporary Selection with the specified nodes to filter using winnow
sel := &Selection{nodes, srcSel.document, nil}
// Filter based on matcher and push on stack
return pushStack(srcSel, winnow(sel, m, true))
}
// Internal implementation of Find that returns raw nodes.
func findWithMatcher(nodes []*html.Node, m Matcher) []*html.Node {
// Map nodes to find the matches within the children of each node
return mapNodes(nodes, func(i int, n *html.Node) (result []*html.Node) {
// Go down one level, because jQuery's Find selects only within descendants
for c := n.FirstChild; c != nil; c = c.NextSibling {
if c.Type == html.ElementNode {
result = append(result, m.MatchAll(c)...)
}
}
return
})
}
// Internal implementation to get all parent nodes, stopping at the specified
// node (or nil if no stop).
func getParentsNodes(nodes []*html.Node, stopm Matcher, stopNodes []*html.Node) []*html.Node {
return mapNodes(nodes, func(i int, n *html.Node) (result []*html.Node) {
for p := n.Parent; p != nil; p = p.Parent {
sel := newSingleSelection(p, nil)
if stopm != nil {
if sel.IsMatcher(stopm) {
break
}
} else if len(stopNodes) > 0 {
if sel.IsNodes(stopNodes...) {
break
}
}
if p.Type == html.ElementNode {
result = append(result, p)
}
}
return
})
}
// Internal implementation of sibling nodes that returns a raw slice of matches.
func getSiblingNodes(nodes []*html.Node, st siblingType, untilm Matcher, untilNodes []*html.Node) []*html.Node {
var f func(*html.Node) bool
// If the requested siblings are ...Until, create the test function to
// determine if the until condition is reached (returns true if it is)
if st == siblingNextUntil || st == siblingPrevUntil {
f = func(n *html.Node) bool {
if untilm != nil {
// Matcher-based condition
sel := newSingleSelection(n, nil)
return sel.IsMatcher(untilm)
} else if len(untilNodes) > 0 {
// Nodes-based condition
sel := newSingleSelection(n, nil)
return sel.IsNodes(untilNodes...)
}
return false
}
}
return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
return getChildrenWithSiblingType(n.Parent, st, n, f)
})
}
// Gets the children nodes of each node in the specified slice of nodes,
// based on the sibling type request.
func getChildrenNodes(nodes []*html.Node, st siblingType) []*html.Node {
return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
return getChildrenWithSiblingType(n, st, nil, nil)
})
}
// Gets the children of the specified parent, based on the requested sibling
// type, skipping a specified node if required.
func getChildrenWithSiblingType(parent *html.Node, st siblingType, skipNode *html.Node,
untilFunc func(*html.Node) bool) (result []*html.Node) {
// Create the iterator function
var iter = func(cur *html.Node) (ret *html.Node) {
// Based on the sibling type requested, iterate the right way
for {
switch st {
case siblingAll, siblingAllIncludingNonElements:
if cur == nil {
// First iteration, start with first child of parent
// Skip node if required
if ret = parent.FirstChild; ret == skipNode && skipNode != nil {
ret = skipNode.NextSibling
}
} else {
// Skip node if required
if ret = cur.NextSibling; ret == skipNode && skipNode != nil {
ret = skipNode.NextSibling
}
}
case siblingPrev, siblingPrevAll, siblingPrevUntil:
if cur == nil {
// Start with previous sibling of the skip node
ret = skipNode.PrevSibling
} else {
ret = cur.PrevSibling
}
case siblingNext, siblingNextAll, siblingNextUntil:
if cur == nil {
// Start with next sibling of the skip node
ret = skipNode.NextSibling
} else {
ret = cur.NextSibling
}
default:
panic("Invalid sibling type.")
}
if ret == nil || ret.Type == html.ElementNode || st == siblingAllIncludingNonElements {
return
}
// Not a valid node, try again from this one
cur = ret
}
}
for c := iter(nil); c != nil; c = iter(c) {
// If this is an ...Until case, test before append (returns true
// if the until condition is reached)
if st == siblingNextUntil || st == siblingPrevUntil {
if untilFunc(c) {
return
}
}
result = append(result, c)
if st == siblingNext || st == siblingPrev {
// Only one node was requested (immediate next or previous), so exit
return
}
}
return
}
// Internal implementation of parent nodes that returns a raw slice of Nodes.
func getParentNodes(nodes []*html.Node) []*html.Node {
return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
if n.Parent != nil && n.Parent.Type == html.ElementNode {
return []*html.Node{n.Parent}
}
return nil
})
}
// Internal map function used by many traversing methods. Takes the source nodes
// to iterate on and the mapping function that returns an array of nodes.
// Returns an array of nodes mapped by calling the callback function once for
// each node in the source nodes.
func mapNodes(nodes []*html.Node, f func(int, *html.Node) []*html.Node) (result []*html.Node) {
set := make(map[*html.Node]bool)
for i, n := range nodes {
if vals := f(i, n); len(vals) > 0 {
result = appendWithoutDuplicates(result, vals, set)
}
}
return result
}
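A short sketch of the traversal methods defined in this file, assuming the canonical import path github.com/PuerkitoBio/goquery; the list fragment and the output comments are illustrative.

package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(
		`<ul><li>one</li><li class="start">two</li><li>three</li><li class="stop">four</li><li>five</li></ul>`))
	if err != nil {
		panic(err)
	}
	ul := doc.Find("ul")

	fmt.Println(ul.Children().Length())              // 5 direct <li> children
	fmt.Println(ul.Find(".start").Parent().Is("ul")) // true: Parent walks one level up

	// NextUntil collects following siblings up to, but not including, ".stop".
	doc.Find(".start").NextUntil(".stop").Each(func(i int, s *goquery.Selection) {
		fmt.Println(s.Text()) // prints "three"
	})
}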

vendor/github.com/PuerkitoBio/goquery/type.go

@ -0,0 +1,135 @@
package goquery
import (
"errors"
"io"
"net/http"
"net/url"
"github.com/andybalholm/cascadia"
"golang.org/x/net/html"
)
// Document represents an HTML document to be manipulated. Unlike jQuery, which
// is loaded as part of a DOM document, and thus acts upon its containing
// document, GoQuery doesn't know which HTML document to act upon. So it needs
// to be told, and that's what the Document class is for. It holds the root
// document node to manipulate, and can make selections on this document.
type Document struct {
*Selection
Url *url.URL
rootNode *html.Node
}
// NewDocumentFromNode is a Document constructor that takes a root html Node
// as argument.
func NewDocumentFromNode(root *html.Node) *Document {
return newDocument(root, nil)
}
// NewDocument is a Document constructor that takes a string URL as argument.
// It loads the specified document, parses it, and stores the root Document
// node, ready to be manipulated.
func NewDocument(url string) (*Document, error) {
// Load the URL
res, e := http.Get(url)
if e != nil {
return nil, e
}
return NewDocumentFromResponse(res)
}
// NewDocumentFromReader returns a Document from a generic reader.
// It returns an error as second value if the reader's data cannot be parsed
// as html. It does *not* check if the reader is also an io.Closer, so the
// provided reader is never closed by this call, it is the responsibility
// of the caller to close it if required.
func NewDocumentFromReader(r io.Reader) (*Document, error) {
root, e := html.Parse(r)
if e != nil {
return nil, e
}
return newDocument(root, nil), nil
}
// NewDocumentFromResponse is another Document constructor that takes an http response as argument.
// It loads the specified response's document, parses it, and stores the root Document
// node, ready to be manipulated. The response's body is closed on return.
func NewDocumentFromResponse(res *http.Response) (*Document, error) {
if res == nil {
return nil, errors.New("Response is nil")
}
defer res.Body.Close()
if res.Request == nil {
return nil, errors.New("Response.Request is nil")
}
// Parse the HTML into nodes
root, e := html.Parse(res.Body)
if e != nil {
return nil, e
}
// Create and fill the document
return newDocument(root, res.Request.URL), nil
}
// CloneDocument creates a deep-clone of a document.
func CloneDocument(doc *Document) *Document {
return newDocument(cloneNode(doc.rootNode), doc.Url)
}
// Private constructor, make sure all fields are correctly filled.
func newDocument(root *html.Node, url *url.URL) *Document {
// Create and fill the document
d := &Document{nil, url, root}
d.Selection = newSingleSelection(root, d)
return d
}
// Selection represents a collection of nodes matching some criteria. The
// initial Selection can be created by using Document.Find, and then
// manipulated using the jQuery-like chainable syntax and methods.
type Selection struct {
Nodes []*html.Node
document *Document
prevSel *Selection
}
// Helper constructor to create an empty selection
func newEmptySelection(doc *Document) *Selection {
return &Selection{nil, doc, nil}
}
// Helper constructor to create a selection of only one node
func newSingleSelection(node *html.Node, doc *Document) *Selection {
return &Selection{[]*html.Node{node}, doc, nil}
}
// Matcher is an interface that defines the methods to match
// HTML nodes against a compiled selector string. Cascadia's
// Selector implements this interface.
type Matcher interface {
Match(*html.Node) bool
MatchAll(*html.Node) []*html.Node
Filter([]*html.Node) []*html.Node
}
// compileMatcher compiles the selector string s and returns
// the corresponding Matcher. If s is an invalid selector string,
// it returns a Matcher that fails all matches.
func compileMatcher(s string) Matcher {
cs, err := cascadia.Compile(s)
if err != nil {
return invalidMatcher{}
}
return cs
}
// invalidMatcher is a Matcher that always fails to match.
type invalidMatcher struct{}
func (invalidMatcher) Match(n *html.Node) bool { return false }
func (invalidMatcher) MatchAll(n *html.Node) []*html.Node { return nil }
func (invalidMatcher) Filter(ns []*html.Node) []*html.Node { return nil }
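A minimal sketch of the Document constructors above, assuming the canonical import path github.com/PuerkitoBio/goquery; the HTML snippet and the comments on expected values are illustrative.

package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// Build a Document from any io.Reader; goquery never closes the reader.
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(
		`<html><head><title>demo</title></head><body></body></html>`))
	if err != nil {
		panic(err)
	}
	fmt.Println(doc.Find("title").Text()) // "demo"

	// CloneDocument deep-copies the tree; edits to the clone do not touch the original.
	clone := goquery.CloneDocument(doc)
	clone.Find("title").SetAttr("data-copy", "true")
	_, exists := doc.Find("title").Attr("data-copy")
	fmt.Println(exists) // false
}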

vendor/github.com/PuerkitoBio/goquery/utilities.go

@ -0,0 +1,161 @@
package goquery
import (
"bytes"
"golang.org/x/net/html"
)
// used to determine if a set (map[*html.Node]bool) should be used
// instead of iterating over a slice. The set uses more memory and
// is slower than slice iteration for small N.
const minNodesForSet = 1000
var nodeNames = []string{
html.ErrorNode: "#error",
html.TextNode: "#text",
html.DocumentNode: "#document",
html.CommentNode: "#comment",
}
// NodeName returns the node name of the first element in the selection.
// It tries to behave in a similar way as the DOM's nodeName property
// (https://developer.mozilla.org/en-US/docs/Web/API/Node/nodeName).
//
// Go's net/html package defines the following node types, listed with
// the corresponding returned value from this function:
//
// ErrorNode : #error
// TextNode : #text
// DocumentNode : #document
// ElementNode : the element's tag name
// CommentNode : #comment
// DoctypeNode : the name of the document type
//
func NodeName(s *Selection) string {
if s.Length() == 0 {
return ""
}
switch n := s.Get(0); n.Type {
case html.ElementNode, html.DoctypeNode:
return n.Data
default:
if n.Type >= 0 && int(n.Type) < len(nodeNames) {
return nodeNames[n.Type]
}
return ""
}
}
// OuterHtml returns the outer HTML rendering of the first item in
// the selection - that is, the HTML including the first element's
// tag and attributes.
//
// Unlike InnerHtml, this is a function and not a method on the Selection,
// because this is not a jQuery method (in javascript-land, this is
// a property provided by the DOM).
func OuterHtml(s *Selection) (string, error) {
var buf bytes.Buffer
if s.Length() == 0 {
return "", nil
}
n := s.Get(0)
if err := html.Render(&buf, n); err != nil {
return "", err
}
return buf.String(), nil
}
// Loop through all container nodes to search for the target node.
func sliceContains(container []*html.Node, contained *html.Node) bool {
for _, n := range container {
if nodeContains(n, contained) {
return true
}
}
return false
}
// Checks if the contained node is within the container node.
func nodeContains(container *html.Node, contained *html.Node) bool {
// Check if the parent of the contained node is the container node, traversing
// upward until the top is reached, or the container is found.
for contained = contained.Parent; contained != nil; contained = contained.Parent {
if container == contained {
return true
}
}
return false
}
// Checks if the target node is in the slice of nodes.
func isInSlice(slice []*html.Node, node *html.Node) bool {
return indexInSlice(slice, node) > -1
}
// Returns the index of the target node in the slice, or -1.
func indexInSlice(slice []*html.Node, node *html.Node) int {
if node != nil {
for i, n := range slice {
if n == node {
return i
}
}
}
return -1
}
// Appends the new nodes to the target slice, making sure no duplicate is added.
// There is no check to the original state of the target slice, so it may still
// contain duplicates. The target slice is returned because append() may create
// a new underlying array. If targetSet is nil, a local set is created with the
// target if len(target) + len(nodes) is greater than minNodesForSet.
func appendWithoutDuplicates(target []*html.Node, nodes []*html.Node, targetSet map[*html.Node]bool) []*html.Node {
// if there are not that many nodes, don't use the map, faster to just use nested loops
// (unless a non-nil targetSet is passed, in which case the caller knows better).
if targetSet == nil && len(target)+len(nodes) < minNodesForSet {
for _, n := range nodes {
if !isInSlice(target, n) {
target = append(target, n)
}
}
return target
}
// if a targetSet is passed, then assume it is reliable, otherwise create one
// and initialize it with the current target contents.
if targetSet == nil {
targetSet = make(map[*html.Node]bool, len(target))
for _, n := range target {
targetSet[n] = true
}
}
for _, n := range nodes {
if !targetSet[n] {
target = append(target, n)
targetSet[n] = true
}
}
return target
}
// Loop through a selection, returning only those nodes that pass the predicate
// function.
func grep(sel *Selection, predicate func(i int, s *Selection) bool) (result []*html.Node) {
for i, n := range sel.Nodes {
if predicate(i, newSingleSelection(n, sel.document)) {
result = append(result, n)
}
}
return result
}
// Creates a new Selection object based on the specified nodes, and keeps the
// source Selection object on the stack (linked list).
func pushStack(fromSel *Selection, nodes []*html.Node) *Selection {
result := &Selection{nodes, fromSel.document, fromSel}
return result
}
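The two exported helpers above, `NodeName` and `OuterHtml`, are easiest to see in a short usage sketch. This is illustrative only; it assumes the vendored goquery package is imported under its canonical path, and the sample HTML is made up.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	doc, err := goquery.NewDocumentFromReader(
		strings.NewReader(`<p class="greeting">Hello <b>world</b></p>`))
	if err != nil {
		panic(err)
	}
	p := doc.Find("p")

	// NodeName mirrors the DOM nodeName property: "p" for this element node.
	fmt.Println(goquery.NodeName(p))

	// OuterHtml renders the first matched node, including its tag and attributes.
	out, err := goquery.OuterHtml(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // <p class="greeting">Hello <b>world</b></p>
}
```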

66
vendor/github.com/Sirupsen/logrus/CHANGELOG.md generated vendored Normal file

@ -0,0 +1,66 @@
# 0.10.0
* feature: Add a test hook (#180)
* feature: `ParseLevel` is now case-insensitive (#326)
* feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308)
* performance: avoid re-allocations on `WithFields` (#335)
# 0.9.0
* logrus/text_formatter: don't emit empty msg
* logrus/hooks/airbrake: move out of main repository
* logrus/hooks/sentry: move out of main repository
* logrus/hooks/papertrail: move out of main repository
* logrus/hooks/bugsnag: move out of main repository
* logrus/core: run tests with `-race`
* logrus/core: detect TTY based on `stderr`
* logrus/core: support `WithError` on logger
* logrus/core: Solaris support
# 0.8.7
* logrus/core: fix possible race (#216)
* logrus/doc: small typo fixes and doc improvements
# 0.8.6
* hooks/raven: allow passing an initialized client
# 0.8.5
* logrus/core: revert #208
# 0.8.4
* formatter/text: fix data race (#218)
# 0.8.3
* logrus/core: fix entry log level (#208)
* logrus/core: improve performance of text formatter by 40%
* logrus/core: expose `LevelHooks` type
* logrus/core: add support for DragonflyBSD and NetBSD
* formatter/text: print structs more verbosely
# 0.8.2
* logrus: fix more Fatal family functions
# 0.8.1
* logrus: fix not exiting on `Fatalf` and `Fatalln`
# 0.8.0
* logrus: defaults to stderr instead of stdout
* hooks/sentry: add special field for `*http.Request`
* formatter/text: ignore Windows for colors
# 0.7.3
* formatter/\*: allow configuration of timestamp layout
# 0.7.2
* formatter/text: Add configuration option for time format (#158)

21
vendor/github.com/Sirupsen/logrus/LICENSE generated vendored Normal file

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Simon Eskildsen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

432
vendor/github.com/Sirupsen/logrus/README.md generated vendored Normal file

@ -0,0 +1,432 @@
# Logrus <img src="http://i.imgur.com/hTeVwmJ.png" width="40" height="40" alt=":walrus:" class="emoji" title=":walrus:"/>&nbsp;[![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus)&nbsp;[![GoDoc](https://godoc.org/github.com/Sirupsen/logrus?status.svg)](https://godoc.org/github.com/Sirupsen/logrus)
**Seeing weird case-sensitive problems?** See [this
issue](https://github.com/sirupsen/logrus/issues/451#issuecomment-264332021).
This change has been reverted. I apologize for causing this. I greatly
underestimated the impact this would have. Logrus strives for stability and
backwards compatibility and failed to provide that.
Logrus is a structured logger for Go (golang), completely API compatible with
the standard library logger. [Godoc][godoc]. **Please note the Logrus API is not
yet stable (pre 1.0). Logrus itself is completely stable and has been used in
many large deployments. The core API is unlikely to change much but please
version control your Logrus to make sure you aren't fetching latest `master` on
every build.**
Nicely color-coded in development (when a TTY is attached, otherwise just
plain text):
![Colored](http://i.imgur.com/PY7qMwd.png)
With `log.SetFormatter(&log.JSONFormatter{})`, for easy parsing by logstash
or Splunk:
```json
{"animal":"walrus","level":"info","msg":"A group of walrus emerges from the
ocean","size":10,"time":"2014-03-10 19:57:38.562264131 -0400 EDT"}
{"level":"warning","msg":"The group's number increased tremendously!",
"number":122,"omg":true,"time":"2014-03-10 19:57:38.562471297 -0400 EDT"}
{"animal":"walrus","level":"info","msg":"A giant walrus appears!",
"size":10,"time":"2014-03-10 19:57:38.562500591 -0400 EDT"}
{"animal":"walrus","level":"info","msg":"Tremendously sized cow enters the ocean.",
"size":9,"time":"2014-03-10 19:57:38.562527896 -0400 EDT"}
{"level":"fatal","msg":"The ice breaks!","number":100,"omg":true,
"time":"2014-03-10 19:57:38.562543128 -0400 EDT"}
```
With the default `log.SetFormatter(&log.TextFormatter{})` when a TTY is not
attached, the output is compatible with the
[logfmt](http://godoc.org/github.com/kr/logfmt) format:
```text
time="2015-03-26T01:27:38-04:00" level=debug msg="Started observing beach" animal=walrus number=8
time="2015-03-26T01:27:38-04:00" level=info msg="A group of walrus emerges from the ocean" animal=walrus size=10
time="2015-03-26T01:27:38-04:00" level=warning msg="The group's number increased tremendously!" number=122 omg=true
time="2015-03-26T01:27:38-04:00" level=debug msg="Temperature changes" temperature=-4
time="2015-03-26T01:27:38-04:00" level=panic msg="It's over 9000!" animal=orca size=9009
time="2015-03-26T01:27:38-04:00" level=fatal msg="The ice breaks!" err=&{0x2082280c0 map[animal:orca size:9009] 2015-03-26 01:27:38.441574009 -0400 EDT panic It's over 9000!} number=100 omg=true
exit status 1
```
#### Example
The simplest way to use Logrus is simply the package-level exported logger:
```go
package main
import (
log "github.com/Sirupsen/logrus"
)
func main() {
log.WithFields(log.Fields{
"animal": "walrus",
}).Info("A walrus appears")
}
```
Note that it's completely api-compatible with the stdlib logger, so you can
replace your `log` imports everywhere with `log "github.com/Sirupsen/logrus"`
and you'll now have the flexibility of Logrus. You can customize it all you
want:
```go
package main
import (
"os"
log "github.com/Sirupsen/logrus"
)
func init() {
// Log as JSON instead of the default ASCII formatter.
log.SetFormatter(&log.JSONFormatter{})
// Output to stderr instead of stdout, could also be a file.
log.SetOutput(os.Stderr)
// Only log the warning severity or above.
log.SetLevel(log.WarnLevel)
}
func main() {
log.WithFields(log.Fields{
"animal": "walrus",
"size": 10,
}).Info("A group of walrus emerges from the ocean")
log.WithFields(log.Fields{
"omg": true,
"number": 122,
}).Warn("The group's number increased tremendously!")
log.WithFields(log.Fields{
"omg": true,
"number": 100,
}).Fatal("The ice breaks!")
// A common pattern is to re-use fields between logging statements by re-using
// the logrus.Entry returned from WithFields()
contextLogger := log.WithFields(log.Fields{
"common": "this is a common field",
"other": "I also should be logged always",
})
contextLogger.Info("I'll be logged with common and other field")
contextLogger.Info("Me too")
}
```
For more advanced usage such as logging to multiple locations from the same
application, you can also create an instance of the `logrus` Logger:
```go
package main
import (
"github.com/Sirupsen/logrus"
)
// Create a new instance of the logger. You can have any number of instances.
var log = logrus.New()
func main() {
// The API for setting attributes is a little different than the package level
// exported logger. See Godoc.
log.Out = os.Stderr
log.WithFields(logrus.Fields{
"animal": "walrus",
"size": 10,
}).Info("A group of walrus emerges from the ocean")
}
```
#### Fields
Logrus encourages careful, structured logging through logging fields instead of
long, unparseable error messages. For example, instead of: `log.Fatalf("Failed
to send event %s to topic %s with key %d")`, you should log the much more
discoverable:
```go
log.WithFields(log.Fields{
"event": event,
"topic": topic,
"key": key,
}).Fatal("Failed to send event")
```
We've found this API forces you to think about logging in a way that produces
much more useful logging messages. We've been in countless situations where just
a single added field to a log statement that was already there would've saved us
hours. The `WithFields` call is optional.
In general, with Logrus, using any of the `printf`-family functions should be
seen as a hint that you should add a field; however, you can still use the
`printf`-family functions with Logrus.
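As a small illustrative middle ground (a sketch; `event`, `topic` and `key` are placeholder variables), identifiers can stay in fields while `printf`-style formatting handles only the human-readable part:

```go
log.WithFields(log.Fields{
  "topic": topic,
  "key":   key,
}).Errorf("failed to send event %q", event)
```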
#### Hooks
You can add hooks for logging levels. For example to send errors to an exception
tracking service on `Error`, `Fatal` and `Panic`, info to StatsD or log to
multiple places simultaneously, e.g. syslog.
Logrus comes with [built-in hooks](hooks/). Add those, or your custom hook, in
`init`:
```go
import (
log "github.com/Sirupsen/logrus"
"gopkg.in/gemnasium/logrus-airbrake-hook.v2" // the package is named "airbrake"
logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog"
"log/syslog"
)
func init() {
// Use the Airbrake hook to report errors that have Error severity or above to
// an exception tracker. You can create custom hooks, see the Hooks section.
log.AddHook(airbrake.NewHook(123, "xyz", "production"))
hook, err := logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "")
if err != nil {
log.Error("Unable to connect to local syslog daemon")
} else {
log.AddHook(hook)
}
}
```
Note: the Syslog hook also supports connecting to local syslog (e.g. "/dev/log", "/var/run/syslog" or "/var/run/log"), as in the sketch below. For details, see the [syslog hook README](hooks/syslog/README.md).
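For example, a sketch of wiring up the local syslog daemon (an empty network and address make the hook dial the local socket; error handling is kept minimal):

```go
import (
  log "github.com/Sirupsen/logrus"
  logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog"
  "log/syslog"
)

func init() {
  // With empty network/raddr the hook connects to the local syslog daemon
  // (e.g. /dev/log) instead of a remote server.
  if hook, err := logrus_syslog.NewSyslogHook("", "", syslog.LOG_INFO, ""); err == nil {
    log.AddHook(hook)
  } else {
    log.Error("Unable to connect to local syslog daemon")
  }
}
```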
| Hook | Description |
| ----- | ----------- |
| [Airbrake](https://github.com/gemnasium/logrus-airbrake-hook) | Send errors to the Airbrake API V3. Uses the official [`gobrake`](https://github.com/airbrake/gobrake) behind the scenes. |
| [Airbrake "legacy"](https://github.com/gemnasium/logrus-airbrake-legacy-hook) | Send errors to an exception tracking service compatible with the Airbrake API V2. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. |
| [Papertrail](https://github.com/polds/logrus-papertrail-hook) | Send errors to the [Papertrail](https://papertrailapp.com) hosted logging service via UDP. |
| [Syslog](https://github.com/Sirupsen/logrus/blob/master/hooks/syslog/syslog.go) | Send errors to remote syslog server. Uses standard library `log/syslog` behind the scenes. |
| [Bugsnag](https://github.com/Shopify/logrus-bugsnag/blob/master/bugsnag.go) | Send errors to the Bugsnag exception tracking service. |
| [Sentry](https://github.com/evalphobia/logrus_sentry) | Send errors to the Sentry error logging and aggregation service. |
| [Hiprus](https://github.com/nubo/hiprus) | Send errors to a channel in hipchat. |
| [Logrusly](https://github.com/sebest/logrusly) | Send logs to [Loggly](https://www.loggly.com/) |
| [Slackrus](https://github.com/johntdyer/slackrus) | Hook for Slack chat. |
| [Journalhook](https://github.com/wercker/journalhook) | Hook for logging to `systemd-journald` |
| [Graylog](https://github.com/gemnasium/logrus-graylog-hook) | Hook for logging to [Graylog](http://graylog2.org/) |
| [Raygun](https://github.com/squirkle/logrus-raygun-hook) | Hook for logging to [Raygun.io](http://raygun.io/) |
| [LFShook](https://github.com/rifflock/lfshook) | Hook for logging to the local filesystem |
| [Honeybadger](https://github.com/agonzalezro/logrus_honeybadger) | Hook for sending exceptions to Honeybadger |
| [Mail](https://github.com/zbindenren/logrus_mail) | Hook for sending exceptions via mail |
| [Rollrus](https://github.com/heroku/rollrus) | Hook for sending errors to rollbar |
| [Fluentd](https://github.com/evalphobia/logrus_fluent) | Hook for logging to fluentd |
| [Mongodb](https://github.com/weekface/mgorus) | Hook for logging to mongodb |
| [Influxus](http://github.com/vlad-doru/influxus) | Hook for concurrently logging to [InfluxDB](http://influxdata.com/) |
| [InfluxDB](https://github.com/Abramovic/logrus_influxdb) | Hook for logging to influxdb |
| [Octokit](https://github.com/dorajistyle/logrus-octokit-hook) | Hook for logging to github via octokit |
| [DeferPanic](https://github.com/deferpanic/dp-logrus) | Hook for logging to DeferPanic |
| [Redis-Hook](https://github.com/rogierlommers/logrus-redis-hook) | Hook for logging to a ELK stack (through Redis) |
| [Amqp-Hook](https://github.com/vladoatanasov/logrus_amqp) | Hook for logging to Amqp broker (Like RabbitMQ) |
| [KafkaLogrus](https://github.com/goibibo/KafkaLogrus) | Hook for logging to kafka |
| [Typetalk](https://github.com/dragon3/logrus-typetalk-hook) | Hook for logging to [Typetalk](https://www.typetalk.in/) |
| [ElasticSearch](https://github.com/sohlich/elogrus) | Hook for logging to ElasticSearch|
| [Sumorus](https://github.com/doublefree/sumorus) | Hook for logging to [SumoLogic](https://www.sumologic.com/)|
| [Scribe](https://github.com/sagar8192/logrus-scribe-hook) | Hook for logging to [Scribe](https://github.com/facebookarchive/scribe)|
| [Logstash](https://github.com/bshuster-repo/logrus-logstash-hook) | Hook for logging to [Logstash](https://www.elastic.co/products/logstash) |
| [logz.io](https://github.com/ripcurld00d/logrus-logzio-hook) | Hook for logging to [logz.io](https://logz.io), a Log as a Service using Logstash |
| [Logmatic.io](https://github.com/logmatic/logmatic-go) | Hook for logging to [Logmatic.io](http://logmatic.io/) |
| [Pushover](https://github.com/toorop/logrus_pushover) | Send error via [Pushover](https://pushover.net) |
| [PostgreSQL](https://github.com/gemnasium/logrus-postgresql-hook) | Send logs to [PostgreSQL](http://postgresql.org) |
#### Level logging
Logrus has six logging levels: Debug, Info, Warning, Error, Fatal and Panic.
```go
log.Debug("Useful debugging information.")
log.Info("Something noteworthy happened!")
log.Warn("You should probably take a look at this.")
log.Error("Something failed but I'm not quitting.")
// Calls os.Exit(1) after logging
log.Fatal("Bye.")
// Calls panic() after logging
log.Panic("I'm bailing.")
```
You can set the logging level on a `Logger`, then it will only log entries with
that severity or anything above it:
```go
// Will log anything that is info or above (warn, error, fatal, panic). Default.
log.SetLevel(log.InfoLevel)
```
It may be useful to set `log.Level = logrus.DebugLevel` in a debug or verbose
environment if your application has that.
#### Entries
Besides the fields added with `WithField` or `WithFields` some fields are
automatically added to all logging events:
1. `time`. The timestamp when the entry was created.
2. `msg`. The logging message passed to `{Info,Warn,Error,Fatal,Panic}` after
the `AddFields` call. E.g. `Failed to send event.`
3. `level`. The logging level. E.g. `info`.
#### Environments
Logrus has no notion of environment.
If you wish for hooks and formatters to only be used in specific environments,
you should handle that yourself. For example, if your application has a global
variable `Environment`, which is a string representation of the environment you
could do:
```go
import (
log "github.com/Sirupsen/logrus"
)
init() {
// do something here to set environment depending on an environment variable
// or command-line flag
if Environment == "production" {
log.SetFormatter(&log.JSONFormatter{})
} else {
// The TextFormatter is default, you don't actually have to do this.
log.SetFormatter(&log.TextFormatter{})
}
}
```
This configuration is how `logrus` was intended to be used, but JSON in
production is mostly only useful if you do log aggregation with tools like
Splunk or Logstash.
#### Formatters
The built-in logging formatters are:
* `logrus.TextFormatter`. Logs the event in colors if stdout is a tty, otherwise
without colors.
* *Note:* to force colored output when there is no TTY, set the `ForceColors`
field to `true`. To force no colored output even if there is a TTY, set the
`DisableColors` field to `true` (see the sketch after this list).
* `logrus.JSONFormatter`. Logs fields as JSON.
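A minimal sketch of tuning the built-in `TextFormatter` (all fields shown exist in the vendored `text_formatter.go` later in this commit; the `time` import is assumed):

```go
log.SetFormatter(&log.TextFormatter{
  ForceColors:     true,         // color even when no TTY is attached
  FullTimestamp:   true,         // full timestamps instead of elapsed seconds
  TimestampFormat: time.RFC3339, // assumes "time" is imported
})
```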
Third party logging formatters:
* [`logstash`](https://github.com/bshuster-repo/logrus-logstash-hook). Logs fields as [Logstash](http://logstash.net) Events.
* [`prefixed`](https://github.com/x-cray/logrus-prefixed-formatter). Displays log entry source along with alternative layout.
* [`zalgo`](https://github.com/aybabtme/logzalgo). Invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦.
You can define your formatter by implementing the `Formatter` interface,
requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a
`Fields` type (`map[string]interface{}`) with all your fields as well as the
default ones (see Entries section above):
```go
type MyJSONFormatter struct {
}

func (f *MyJSONFormatter) Format(entry *log.Entry) ([]byte, error) {
// Note this doesn't include Time, Level and Message which are available on
// the Entry. Consult `godoc` on information about those fields or read the
// source of the official loggers.
serialized, err := json.Marshal(entry.Data)
if err != nil {
return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
}
return append(serialized, '\n'), nil
}

func init() {
log.SetFormatter(new(MyJSONFormatter))
}
```
#### Logger as an `io.Writer`
Logrus can be transformed into an `io.Writer`. That writer is the end of an `io.Pipe` and it is your responsibility to close it.
```go
w := logger.Writer()
defer w.Close()
srv := http.Server{
// create a stdlib log.Logger that writes to
// logrus.Logger.
ErrorLog: log.New(w, "", 0),
}
```
Each line written to that writer will be printed the usual way, using formatters
and hooks. The level for those entries is `info`.
#### Rotation
Log rotation is not provided with Logrus. Log rotation should be done by an
external program (like `logrotate(8)`) that can compress and delete old log
entries. It should not be a feature of the application-level logger.
#### Tools
| Tool | Description |
| ---- | ----------- |
|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus Mate is a tool for Logrus that manages loggers: you can initialize a logger's level, hook and formatter from a config file, and loggers are generated with different configs for different environments.|
|[Logrus Viper Helper](https://github.com/heirko/go-contrib/tree/master/logrusHelper)|A helper around Logrus that wraps it with spf13/Viper to load configuration with fangs, and simplifies Logrus configuration using some behavior of [Logrus Mate](https://github.com/gogap/logrus_mate). [sample](https://github.com/heirko/iris-contrib/blob/master/middleware/logrus-logger/example) |
#### Testing
Logrus has a built in facility for asserting the presence of log messages. This is implemented through the `test` hook and provides:
* decorators for existing logger (`test.NewLocal` and `test.NewGlobal`) which basically just add the `test` hook
* a test logger (`test.NewNullLogger`) that just records log messages (and does not output any):
```go
logger, hook := test.NewNullLogger()
logger.Error("Hello error")
assert.Equal(1, len(hook.Entries))
assert.Equal(logrus.ErrorLevel, hook.LastEntry().Level)
assert.Equal("Hello error", hook.LastEntry().Message)
hook.Reset()
assert.Nil(hook.LastEntry())
```
#### Fatal handlers
Logrus can register one or more functions that will be called when any `fatal`
level message is logged. The registered handlers will be executed before
logrus performs an `os.Exit(1)`. This behavior may be helpful if callers need
to shut down gracefully. Unlike a `panic("Something went wrong...")` call, which can be intercepted with a deferred `recover`, a call to `os.Exit(1)` cannot be intercepted.
```
...
handler := func() {
// gracefully shutdown something...
}
logrus.RegisterExitHandler(handler)
...
```
#### Thread safety
By default, the Logger is protected by a mutex for concurrent writes; this mutex is also held when calling hooks and writing logs.
If you are sure such locking is not needed, you can call logger.SetNoLock() to disable it.
Situations when locking is not needed include:
* You have no hooks registered, or calling hooks is already thread-safe.
* Writing to logger.Out is already thread-safe, for example:
1) logger.Out is protected by locks.
2) logger.Out is an os.File handle opened with the `O_APPEND` flag, and every write is smaller than 4k. (This allows multi-thread/multi-process writing.)
(Refer to http://www.notthewizard.com/2014/06/17/are-files-appends-really-atomic/)
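A sketch of opting out of the internal lock when the conditions above hold (the log path is hypothetical and error handling is trimmed; assumes `logrus` and `os` are imported):

```go
logger := logrus.New()
f, err := os.OpenFile("/var/log/app.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err == nil {
  logger.Out = f     // O_APPEND writes under 4k are effectively atomic on Linux
  logger.SetNoLock() // safe here because the writer itself is append-safe
}
```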

64
vendor/github.com/Sirupsen/logrus/alt_exit.go generated vendored Normal file

@ -0,0 +1,64 @@
package logrus
// The following code was sourced and modified from the
// https://bitbucket.org/tebeka/atexit package governed by the following license:
//
// Copyright (c) 2012 Miki Tebeka <miki.tebeka@gmail.com>.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy of
// this software and associated documentation files (the "Software"), to deal in
// the Software without restriction, including without limitation the rights to
// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
// the Software, and to permit persons to whom the Software is furnished to do so,
// subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
// FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
// IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
// CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import (
"fmt"
"os"
)
var handlers = []func(){}
func runHandler(handler func()) {
defer func() {
if err := recover(); err != nil {
fmt.Fprintln(os.Stderr, "Error: Logrus exit handler error:", err)
}
}()
handler()
}
func runHandlers() {
for _, handler := range handlers {
runHandler(handler)
}
}
// Exit runs all the Logrus atexit handlers and then terminates the program using os.Exit(code)
func Exit(code int) {
runHandlers()
os.Exit(code)
}
// RegisterExitHandler adds a Logrus Exit handler, call logrus.Exit to invoke
// all handlers. The handlers will also be invoked when any Fatal log entry is
// made.
//
// This method is useful when a caller wishes to use logrus to log a fatal
// message but also needs to shut down gracefully. An example use case could be
// closing database connections, or sending an alert that the application is
// closing.
func RegisterExitHandler(handler func()) {
handlers = append(handlers, handler)
}
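A short, hypothetical sketch of using `RegisterExitHandler` from this file: the handler runs before logrus calls `os.Exit(1)` on a fatal log entry.

```go
package main

import (
	"fmt"

	log "github.com/Sirupsen/logrus"
)

func main() {
	log.RegisterExitHandler(func() {
		// e.g. flush buffers, close database connections...
		fmt.Println("shutting down cleanly")
	})
	log.Fatal("unrecoverable error") // logs, runs the handler, then exits
}
```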

26
vendor/github.com/Sirupsen/logrus/doc.go generated vendored Normal file

@ -0,0 +1,26 @@
/*
Package logrus is a structured logger for Go, completely API compatible with the standard library logger.
The simplest way to use Logrus is simply the package-level exported logger:
package main
import (
log "github.com/Sirupsen/logrus"
)
func main() {
log.WithFields(log.Fields{
"animal": "walrus",
"number": 1,
"size": 10,
}).Info("A walrus appears")
}
Output:
time="2015-09-07T08:48:33Z" level=info msg="A walrus appears" animal=walrus number=1 size=10
For a full guide visit https://github.com/Sirupsen/logrus
*/
package logrus

275
vendor/github.com/Sirupsen/logrus/entry.go generated vendored Normal file

@ -0,0 +1,275 @@
package logrus
import (
"bytes"
"fmt"
"os"
"sync"
"time"
)
var bufferPool *sync.Pool
func init() {
bufferPool = &sync.Pool{
New: func() interface{} {
return new(bytes.Buffer)
},
}
}
// Defines the key when adding errors using WithError.
var ErrorKey = "error"
// An entry is the final or intermediate Logrus logging entry. It contains all
// the fields passed with WithField{,s}. It's finally logged when Debug, Info,
// Warn, Error, Fatal or Panic is called on it. These objects can be reused and
// passed around as much as you wish to avoid field duplication.
type Entry struct {
Logger *Logger
// Contains all the fields set by the user.
Data Fields
// Time at which the log entry was created
Time time.Time
// Level the log entry was logged at: Debug, Info, Warn, Error, Fatal or Panic
Level Level
// Message passed to Debug, Info, Warn, Error, Fatal or Panic
Message string
// When the formatter is called in entry.log(), a Buffer may be set on the entry
Buffer *bytes.Buffer
}
func NewEntry(logger *Logger) *Entry {
return &Entry{
Logger: logger,
// Default is three fields, give a little extra room
Data: make(Fields, 5),
}
}
// Returns the string representation from the reader and ultimately the
// formatter.
func (entry *Entry) String() (string, error) {
serialized, err := entry.Logger.Formatter.Format(entry)
if err != nil {
return "", err
}
str := string(serialized)
return str, nil
}
// Add an error as single field (using the key defined in ErrorKey) to the Entry.
func (entry *Entry) WithError(err error) *Entry {
return entry.WithField(ErrorKey, err)
}
// Add a single field to the Entry.
func (entry *Entry) WithField(key string, value interface{}) *Entry {
return entry.WithFields(Fields{key: value})
}
// Add a map of fields to the Entry.
func (entry *Entry) WithFields(fields Fields) *Entry {
data := make(Fields, len(entry.Data)+len(fields))
for k, v := range entry.Data {
data[k] = v
}
for k, v := range fields {
data[k] = v
}
return &Entry{Logger: entry.Logger, Data: data}
}
// This function is not declared with a pointer value because otherwise
// race conditions will occur when using multiple goroutines
func (entry Entry) log(level Level, msg string) {
var buffer *bytes.Buffer
entry.Time = time.Now()
entry.Level = level
entry.Message = msg
if err := entry.Logger.Hooks.Fire(level, &entry); err != nil {
entry.Logger.mu.Lock()
fmt.Fprintf(os.Stderr, "Failed to fire hook: %v\n", err)
entry.Logger.mu.Unlock()
}
buffer = bufferPool.Get().(*bytes.Buffer)
buffer.Reset()
defer bufferPool.Put(buffer)
entry.Buffer = buffer
serialized, err := entry.Logger.Formatter.Format(&entry)
entry.Buffer = nil
if err != nil {
entry.Logger.mu.Lock()
fmt.Fprintf(os.Stderr, "Failed to obtain reader, %v\n", err)
entry.Logger.mu.Unlock()
} else {
entry.Logger.mu.Lock()
_, err = entry.Logger.Out.Write(serialized)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to write to log, %v\n", err)
}
entry.Logger.mu.Unlock()
}
// To avoid Entry#log() returning a value that only would make sense for
// panic() to use in Entry#Panic(), we avoid the allocation by checking
// directly here.
if level <= PanicLevel {
panic(&entry)
}
}
func (entry *Entry) Debug(args ...interface{}) {
if entry.Logger.Level >= DebugLevel {
entry.log(DebugLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Print(args ...interface{}) {
entry.Info(args...)
}
func (entry *Entry) Info(args ...interface{}) {
if entry.Logger.Level >= InfoLevel {
entry.log(InfoLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Warn(args ...interface{}) {
if entry.Logger.Level >= WarnLevel {
entry.log(WarnLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Warning(args ...interface{}) {
entry.Warn(args...)
}
func (entry *Entry) Error(args ...interface{}) {
if entry.Logger.Level >= ErrorLevel {
entry.log(ErrorLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Fatal(args ...interface{}) {
if entry.Logger.Level >= FatalLevel {
entry.log(FatalLevel, fmt.Sprint(args...))
}
Exit(1)
}
func (entry *Entry) Panic(args ...interface{}) {
if entry.Logger.Level >= PanicLevel {
entry.log(PanicLevel, fmt.Sprint(args...))
}
panic(fmt.Sprint(args...))
}
// Entry Printf family functions
func (entry *Entry) Debugf(format string, args ...interface{}) {
if entry.Logger.Level >= DebugLevel {
entry.Debug(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Infof(format string, args ...interface{}) {
if entry.Logger.Level >= InfoLevel {
entry.Info(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Printf(format string, args ...interface{}) {
entry.Infof(format, args...)
}
func (entry *Entry) Warnf(format string, args ...interface{}) {
if entry.Logger.Level >= WarnLevel {
entry.Warn(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Warningf(format string, args ...interface{}) {
entry.Warnf(format, args...)
}
func (entry *Entry) Errorf(format string, args ...interface{}) {
if entry.Logger.Level >= ErrorLevel {
entry.Error(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Fatalf(format string, args ...interface{}) {
if entry.Logger.Level >= FatalLevel {
entry.Fatal(fmt.Sprintf(format, args...))
}
Exit(1)
}
func (entry *Entry) Panicf(format string, args ...interface{}) {
if entry.Logger.Level >= PanicLevel {
entry.Panic(fmt.Sprintf(format, args...))
}
}
// Entry Println family functions
func (entry *Entry) Debugln(args ...interface{}) {
if entry.Logger.Level >= DebugLevel {
entry.Debug(entry.sprintlnn(args...))
}
}
func (entry *Entry) Infoln(args ...interface{}) {
if entry.Logger.Level >= InfoLevel {
entry.Info(entry.sprintlnn(args...))
}
}
func (entry *Entry) Println(args ...interface{}) {
entry.Infoln(args...)
}
func (entry *Entry) Warnln(args ...interface{}) {
if entry.Logger.Level >= WarnLevel {
entry.Warn(entry.sprintlnn(args...))
}
}
func (entry *Entry) Warningln(args ...interface{}) {
entry.Warnln(args...)
}
func (entry *Entry) Errorln(args ...interface{}) {
if entry.Logger.Level >= ErrorLevel {
entry.Error(entry.sprintlnn(args...))
}
}
func (entry *Entry) Fatalln(args ...interface{}) {
if entry.Logger.Level >= FatalLevel {
entry.Fatal(entry.sprintlnn(args...))
}
Exit(1)
}
func (entry *Entry) Panicln(args ...interface{}) {
if entry.Logger.Level >= PanicLevel {
entry.Panic(entry.sprintlnn(args...))
}
}
// Sprintlnn => Sprint no newline. This is to get the behavior of how
// fmt.Sprintln where spaces are always added between operands, regardless of
// their type. Instead of vendoring the Sprintln implementation to spare a
// string allocation, we do the simplest thing.
func (entry *Entry) sprintlnn(args ...interface{}) string {
msg := fmt.Sprintln(args...)
return msg[:len(msg)-1]
}

193
vendor/github.com/Sirupsen/logrus/exported.go generated vendored Normal file

@ -0,0 +1,193 @@
package logrus
import (
"io"
)
var (
// std is the name of the standard logger in stdlib `log`
std = New()
)
func StandardLogger() *Logger {
return std
}
// SetOutput sets the standard logger output.
func SetOutput(out io.Writer) {
std.mu.Lock()
defer std.mu.Unlock()
std.Out = out
}
// SetFormatter sets the standard logger formatter.
func SetFormatter(formatter Formatter) {
std.mu.Lock()
defer std.mu.Unlock()
std.Formatter = formatter
}
// SetLevel sets the standard logger level.
func SetLevel(level Level) {
std.mu.Lock()
defer std.mu.Unlock()
std.Level = level
}
// GetLevel returns the standard logger level.
func GetLevel() Level {
std.mu.Lock()
defer std.mu.Unlock()
return std.Level
}
// AddHook adds a hook to the standard logger hooks.
func AddHook(hook Hook) {
std.mu.Lock()
defer std.mu.Unlock()
std.Hooks.Add(hook)
}
// WithError creates an entry from the standard logger and adds an error to it, using the value defined in ErrorKey as key.
func WithError(err error) *Entry {
return std.WithField(ErrorKey, err)
}
// WithField creates an entry from the standard logger and adds a field to
// it. If you want multiple fields, use `WithFields`.
//
// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal
// or Panic on the Entry it returns.
func WithField(key string, value interface{}) *Entry {
return std.WithField(key, value)
}
// WithFields creates an entry from the standard logger and adds multiple
// fields to it. This is simply a helper for `WithField`, invoking it
// once for each field.
//
// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal
// or Panic on the Entry it returns.
func WithFields(fields Fields) *Entry {
return std.WithFields(fields)
}
// Debug logs a message at level Debug on the standard logger.
func Debug(args ...interface{}) {
std.Debug(args...)
}
// Print logs a message at level Info on the standard logger.
func Print(args ...interface{}) {
std.Print(args...)
}
// Info logs a message at level Info on the standard logger.
func Info(args ...interface{}) {
std.Info(args...)
}
// Warn logs a message at level Warn on the standard logger.
func Warn(args ...interface{}) {
std.Warn(args...)
}
// Warning logs a message at level Warn on the standard logger.
func Warning(args ...interface{}) {
std.Warning(args...)
}
// Error logs a message at level Error on the standard logger.
func Error(args ...interface{}) {
std.Error(args...)
}
// Panic logs a message at level Panic on the standard logger.
func Panic(args ...interface{}) {
std.Panic(args...)
}
// Fatal logs a message at level Fatal on the standard logger.
func Fatal(args ...interface{}) {
std.Fatal(args...)
}
// Debugf logs a message at level Debug on the standard logger.
func Debugf(format string, args ...interface{}) {
std.Debugf(format, args...)
}
// Printf logs a message at level Info on the standard logger.
func Printf(format string, args ...interface{}) {
std.Printf(format, args...)
}
// Infof logs a message at level Info on the standard logger.
func Infof(format string, args ...interface{}) {
std.Infof(format, args...)
}
// Warnf logs a message at level Warn on the standard logger.
func Warnf(format string, args ...interface{}) {
std.Warnf(format, args...)
}
// Warningf logs a message at level Warn on the standard logger.
func Warningf(format string, args ...interface{}) {
std.Warningf(format, args...)
}
// Errorf logs a message at level Error on the standard logger.
func Errorf(format string, args ...interface{}) {
std.Errorf(format, args...)
}
// Panicf logs a message at level Panic on the standard logger.
func Panicf(format string, args ...interface{}) {
std.Panicf(format, args...)
}
// Fatalf logs a message at level Fatal on the standard logger.
func Fatalf(format string, args ...interface{}) {
std.Fatalf(format, args...)
}
// Debugln logs a message at level Debug on the standard logger.
func Debugln(args ...interface{}) {
std.Debugln(args...)
}
// Println logs a message at level Info on the standard logger.
func Println(args ...interface{}) {
std.Println(args...)
}
// Infoln logs a message at level Info on the standard logger.
func Infoln(args ...interface{}) {
std.Infoln(args...)
}
// Warnln logs a message at level Warn on the standard logger.
func Warnln(args ...interface{}) {
std.Warnln(args...)
}
// Warningln logs a message at level Warn on the standard logger.
func Warningln(args ...interface{}) {
std.Warningln(args...)
}
// Errorln logs a message at level Error on the standard logger.
func Errorln(args ...interface{}) {
std.Errorln(args...)
}
// Panicln logs a message at level Panic on the standard logger.
func Panicln(args ...interface{}) {
std.Panicln(args...)
}
// Fatalln logs a message at level Fatal on the standard logger.
func Fatalln(args ...interface{}) {
std.Fatalln(args...)
}

45
vendor/github.com/Sirupsen/logrus/formatter.go generated vendored Normal file

@ -0,0 +1,45 @@
package logrus
import "time"
const DefaultTimestampFormat = time.RFC3339
// The Formatter interface is used to implement a custom Formatter. It takes an
// `Entry`. It exposes all the fields, including the default ones:
//
// * `entry.Data["msg"]`. The message passed from Info, Warn, Error ..
// * `entry.Data["time"]`. The timestamp.
// * `entry.Data["level"]`. The level the entry was logged at.
//
// Any additional fields added with `WithField` or `WithFields` are also in
// `entry.Data`. Format is expected to return an array of bytes which are then
// logged to `logger.Out`.
type Formatter interface {
Format(*Entry) ([]byte, error)
}
// This is to not silently overwrite `time`, `msg` and `level` fields when
// dumping it. If this code wasn't there doing:
//
// logrus.WithField("level", 1).Info("hello")
//
// Would just silently drop the user provided level. Instead with this code
// it'll logged as:
//
// {"level": "info", "fields.level": 1, "msg": "hello", "time": "..."}
//
// It's not exported because it's still using Data in an opinionated way. It's to
// avoid code duplication between the two default formatters.
func prefixFieldClashes(data Fields) {
if t, ok := data["time"]; ok {
data["fields.time"] = t
}
if m, ok := data["msg"]; ok {
data["fields.msg"] = m
}
if l, ok := data["level"]; ok {
data["fields.level"] = l
}
}

34
vendor/github.com/Sirupsen/logrus/hooks.go generated vendored Normal file

@ -0,0 +1,34 @@
package logrus
// A hook to be fired when logging on the logging levels returned from
// `Levels()` on your implementation of the interface. Note that this is not
// fired in a goroutine or a channel with workers, you should handle such
// functionality yourself if your call is non-blocking and you don't wish for
// the logging calls for levels returned from `Levels()` to block.
type Hook interface {
Levels() []Level
Fire(*Entry) error
}
// Internal type for storing the hooks on a logger instance.
type LevelHooks map[Level][]Hook
// Add a hook to an instance of logger. This is called with
// `log.Hooks.Add(new(MyHook))` where `MyHook` implements the `Hook` interface.
func (hooks LevelHooks) Add(hook Hook) {
for _, level := range hook.Levels() {
hooks[level] = append(hooks[level], hook)
}
}
// Fire all the hooks for the passed level. Used by `entry.log` to fire
// appropriate hooks for a log entry.
func (hooks LevelHooks) Fire(level Level, entry *Entry) error {
for _, hook := range hooks[level] {
if err := hook.Fire(entry); err != nil {
return err
}
}
return nil
}
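A minimal sketch of implementing the `Hook` interface defined above (the `stderrNotifyHook` type is hypothetical, not part of the vendored code):

```go
package main

import (
	"fmt"
	"os"

	log "github.com/Sirupsen/logrus"
)

type stderrNotifyHook struct{}

// Levels restricts the hook to error-like entries.
func (stderrNotifyHook) Levels() []log.Level {
	return []log.Level{log.ErrorLevel, log.FatalLevel, log.PanicLevel}
}

// Fire writes a short alert line for every matching entry.
func (stderrNotifyHook) Fire(entry *log.Entry) error {
	_, err := fmt.Fprintf(os.Stderr, "ALERT: %s\n", entry.Message)
	return err
}

func main() {
	log.AddHook(stderrNotifyHook{})
	log.Error("something broke") // fires the hook, then logs as usual
}
```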

74
vendor/github.com/Sirupsen/logrus/json_formatter.go generated vendored Normal file

@ -0,0 +1,74 @@
package logrus
import (
"encoding/json"
"fmt"
)
type fieldKey string
type FieldMap map[fieldKey]string
const (
FieldKeyMsg = "msg"
FieldKeyLevel = "level"
FieldKeyTime = "time"
)
func (f FieldMap) resolve(key fieldKey) string {
if k, ok := f[key]; ok {
return k
}
return string(key)
}
type JSONFormatter struct {
// TimestampFormat sets the format used for marshaling timestamps.
TimestampFormat string
// DisableTimestamp allows disabling automatic timestamps in output
DisableTimestamp bool
// FieldMap allows users to customize the names of keys for various fields.
// As an example:
// formatter := &JSONFormatter{
// FieldMap: FieldMap{
// FieldKeyTime: "@timestamp",
// FieldKeyLevel: "@level",
// FieldKeyMsg: "@message",
// },
// }
FieldMap FieldMap
}
func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) {
data := make(Fields, len(entry.Data)+3)
for k, v := range entry.Data {
switch v := v.(type) {
case error:
// Otherwise errors are ignored by `encoding/json`
// https://github.com/Sirupsen/logrus/issues/137
data[k] = v.Error()
default:
data[k] = v
}
}
prefixFieldClashes(data)
timestampFormat := f.TimestampFormat
if timestampFormat == "" {
timestampFormat = DefaultTimestampFormat
}
if !f.DisableTimestamp {
data[f.FieldMap.resolve(FieldKeyTime)] = entry.Time.Format(timestampFormat)
}
data[f.FieldMap.resolve(FieldKeyMsg)] = entry.Message
data[f.FieldMap.resolve(FieldKeyLevel)] = entry.Level.String()
serialized, err := json.Marshal(data)
if err != nil {
return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
}
return append(serialized, '\n'), nil
}
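A sketch of renaming the default keys via `FieldMap` (the constants and fields come from the formatter above; the target key names are made up):

```go
log.SetFormatter(&log.JSONFormatter{
  FieldMap: log.FieldMap{
    log.FieldKeyTime:  "@timestamp",
    log.FieldKeyLevel: "@level",
    log.FieldKeyMsg:   "@message",
  },
})
```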

308
vendor/github.com/Sirupsen/logrus/logger.go generated vendored Normal file

@ -0,0 +1,308 @@
package logrus
import (
"io"
"os"
"sync"
)
type Logger struct {
// The logs are `io.Copy`'d to this in a mutex. It's common to set this to a
// file, or leave it default which is `os.Stderr`. You can also set this to
// something more adventurous, such as logging to Kafka.
Out io.Writer
// Hooks for the logger instance. These allow firing events based on logging
// levels and log entries. For example, to send errors to an error tracking
// service, log to StatsD or dump the core on fatal errors.
Hooks LevelHooks
// All log entries pass through the formatter before logged to Out. The
// included formatters are `TextFormatter` and `JSONFormatter` for which
// TextFormatter is the default. In development (when a TTY is attached) it
// logs with colors, but to a file it wouldn't. You can easily implement your
// own that implements the `Formatter` interface, see the `README` or included
// formatters for examples.
Formatter Formatter
// The logging level the logger should log at. This is typically (and defaults
// to) `logrus.Info`, which allows Info(), Warn(), Error() and Fatal() to be
// logged. `logrus.Debug` is useful in development and verbose debugging scenarios.
Level Level
// Used to sync writing to the log. Locking is enabled by default.
mu MutexWrap
// Reusable empty entry
entryPool sync.Pool
}
type MutexWrap struct {
lock sync.Mutex
disabled bool
}
func (mw *MutexWrap) Lock() {
if !mw.disabled {
mw.lock.Lock()
}
}
func (mw *MutexWrap) Unlock() {
if !mw.disabled {
mw.lock.Unlock()
}
}
func (mw *MutexWrap) Disable() {
mw.disabled = true
}
// Creates a new logger. Configuration should be set by changing `Formatter`,
// `Out` and `Hooks` directly on the default logger instance. You can also just
// instantiate your own:
//
// var log = &Logger{
// Out: os.Stderr,
// Formatter: new(JSONFormatter),
// Hooks: make(LevelHooks),
// Level: logrus.DebugLevel,
// }
//
// It's recommended to make this a global instance called `log`.
func New() *Logger {
return &Logger{
Out: os.Stderr,
Formatter: new(TextFormatter),
Hooks: make(LevelHooks),
Level: InfoLevel,
}
}
func (logger *Logger) newEntry() *Entry {
entry, ok := logger.entryPool.Get().(*Entry)
if ok {
return entry
}
return NewEntry(logger)
}
func (logger *Logger) releaseEntry(entry *Entry) {
logger.entryPool.Put(entry)
}
// Adds a field to the log entry, note that it doesn't log until you call
// Debug, Print, Info, Warn, Fatal or Panic. It only creates a log entry.
// If you want multiple fields, use `WithFields`.
func (logger *Logger) WithField(key string, value interface{}) *Entry {
entry := logger.newEntry()
defer logger.releaseEntry(entry)
return entry.WithField(key, value)
}
// Adds a struct of fields to the log entry. All it does is call `WithField` for
// each `Field`.
func (logger *Logger) WithFields(fields Fields) *Entry {
entry := logger.newEntry()
defer logger.releaseEntry(entry)
return entry.WithFields(fields)
}
// Add an error as single field to the log entry. All it does is call
// `WithError` for the given `error`.
func (logger *Logger) WithError(err error) *Entry {
entry := logger.newEntry()
defer logger.releaseEntry(entry)
return entry.WithError(err)
}
func (logger *Logger) Debugf(format string, args ...interface{}) {
if logger.Level >= DebugLevel {
entry := logger.newEntry()
entry.Debugf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Infof(format string, args ...interface{}) {
if logger.Level >= InfoLevel {
entry := logger.newEntry()
entry.Infof(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Printf(format string, args ...interface{}) {
entry := logger.newEntry()
entry.Printf(format, args...)
logger.releaseEntry(entry)
}
func (logger *Logger) Warnf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Warningf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Errorf(format string, args ...interface{}) {
if logger.Level >= ErrorLevel {
entry := logger.newEntry()
entry.Errorf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Fatalf(format string, args ...interface{}) {
if logger.Level >= FatalLevel {
entry := logger.newEntry()
entry.Fatalf(format, args...)
logger.releaseEntry(entry)
}
Exit(1)
}
func (logger *Logger) Panicf(format string, args ...interface{}) {
if logger.Level >= PanicLevel {
entry := logger.newEntry()
entry.Panicf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Debug(args ...interface{}) {
if logger.Level >= DebugLevel {
entry := logger.newEntry()
entry.Debug(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Info(args ...interface{}) {
if logger.Level >= InfoLevel {
entry := logger.newEntry()
entry.Info(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Print(args ...interface{}) {
entry := logger.newEntry()
entry.Info(args...)
logger.releaseEntry(entry)
}
func (logger *Logger) Warn(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warn(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Warning(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warn(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Error(args ...interface{}) {
if logger.Level >= ErrorLevel {
entry := logger.newEntry()
entry.Error(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Fatal(args ...interface{}) {
if logger.Level >= FatalLevel {
entry := logger.newEntry()
entry.Fatal(args...)
logger.releaseEntry(entry)
}
Exit(1)
}
func (logger *Logger) Panic(args ...interface{}) {
if logger.Level >= PanicLevel {
entry := logger.newEntry()
entry.Panic(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Debugln(args ...interface{}) {
if logger.Level >= DebugLevel {
entry := logger.newEntry()
entry.Debugln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Infoln(args ...interface{}) {
if logger.Level >= InfoLevel {
entry := logger.newEntry()
entry.Infoln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Println(args ...interface{}) {
entry := logger.newEntry()
entry.Println(args...)
logger.releaseEntry(entry)
}
func (logger *Logger) Warnln(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Warningln(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Errorln(args ...interface{}) {
if logger.Level >= ErrorLevel {
entry := logger.newEntry()
entry.Errorln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Fatalln(args ...interface{}) {
if logger.Level >= FatalLevel {
entry := logger.newEntry()
entry.Fatalln(args...)
logger.releaseEntry(entry)
}
Exit(1)
}
func (logger *Logger) Panicln(args ...interface{}) {
if logger.Level >= PanicLevel {
entry := logger.newEntry()
entry.Panicln(args...)
logger.releaseEntry(entry)
}
}
//When file is opened with appending mode, it's safe to
//write concurrently to a file (within 4k message on Linux).
//In these cases user can choose to disable the lock.
func (logger *Logger) SetNoLock() {
logger.mu.Disable()
}

143
vendor/github.com/Sirupsen/logrus/logrus.go generated vendored Normal file

@ -0,0 +1,143 @@
package logrus
import (
"fmt"
"log"
"strings"
)
// Fields type, used to pass to `WithFields`.
type Fields map[string]interface{}
// Level type
type Level uint8
// Convert the Level to a string. E.g. PanicLevel becomes "panic".
func (level Level) String() string {
switch level {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warning"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
case PanicLevel:
return "panic"
}
return "unknown"
}
// ParseLevel takes a string level and returns the Logrus log level constant.
func ParseLevel(lvl string) (Level, error) {
switch strings.ToLower(lvl) {
case "panic":
return PanicLevel, nil
case "fatal":
return FatalLevel, nil
case "error":
return ErrorLevel, nil
case "warn", "warning":
return WarnLevel, nil
case "info":
return InfoLevel, nil
case "debug":
return DebugLevel, nil
}
var l Level
return l, fmt.Errorf("not a valid logrus Level: %q", lvl)
}
// A constant exposing all logging levels
var AllLevels = []Level{
PanicLevel,
FatalLevel,
ErrorLevel,
WarnLevel,
InfoLevel,
DebugLevel,
}
// These are the different logging levels. You can set the logging level to log
// on your instance of logger, obtained with `logrus.New()`.
const (
// PanicLevel level, highest level of severity. Logs and then calls panic with the
// message passed to Debug, Info, ...
PanicLevel Level = iota
// FatalLevel level. Logs and then calls `os.Exit(1)`. It will exit even if the
// logging level is set to Panic.
FatalLevel
// ErrorLevel level. Logs. Used for errors that should definitely be noted.
// Commonly used for hooks to send errors to an error tracking service.
ErrorLevel
// WarnLevel level. Non-critical entries that deserve eyes.
WarnLevel
// InfoLevel level. General operational entries about what's going on inside the
// application.
InfoLevel
// DebugLevel level. Usually only enabled when debugging. Very verbose logging.
DebugLevel
)
// Won't compile if StdLogger can't be realized by a log.Logger
var (
_ StdLogger = &log.Logger{}
_ StdLogger = &Entry{}
_ StdLogger = &Logger{}
)
// StdLogger is what your logrus-enabled library should take, that way
// it'll accept a stdlib logger and a logrus logger. There's no standard
// interface, this is the closest we get, unfortunately.
type StdLogger interface {
Print(...interface{})
Printf(string, ...interface{})
Println(...interface{})
Fatal(...interface{})
Fatalf(string, ...interface{})
Fatalln(...interface{})
Panic(...interface{})
Panicf(string, ...interface{})
Panicln(...interface{})
}
// The FieldLogger interface generalizes the Entry and Logger types
type FieldLogger interface {
WithField(key string, value interface{}) *Entry
WithFields(fields Fields) *Entry
WithError(err error) *Entry
Debugf(format string, args ...interface{})
Infof(format string, args ...interface{})
Printf(format string, args ...interface{})
Warnf(format string, args ...interface{})
Warningf(format string, args ...interface{})
Errorf(format string, args ...interface{})
Fatalf(format string, args ...interface{})
Panicf(format string, args ...interface{})
Debug(args ...interface{})
Info(args ...interface{})
Print(args ...interface{})
Warn(args ...interface{})
Warning(args ...interface{})
Error(args ...interface{})
Fatal(args ...interface{})
Panic(args ...interface{})
Debugln(args ...interface{})
Infoln(args ...interface{})
Println(args ...interface{})
Warnln(args ...interface{})
Warningln(args ...interface{})
Errorln(args ...interface{})
Fatalln(args ...interface{})
Panicln(args ...interface{})
}
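A sketch of using `ParseLevel` together with the level constants above, assuming a hypothetical `LOG_LEVEL` environment variable:

```go
package main

import (
	"os"

	log "github.com/Sirupsen/logrus"
)

func main() {
	lvl, err := log.ParseLevel(os.Getenv("LOG_LEVEL"))
	if err != nil {
		lvl = log.InfoLevel // unknown or empty value: keep the default
	}
	log.SetLevel(lvl)
	log.Debug("only visible when LOG_LEVEL=debug")
}
```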

8
vendor/github.com/Sirupsen/logrus/terminal_appengine.go generated vendored Normal file

@ -0,0 +1,8 @@
// +build appengine
package logrus
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
return true
}

10
vendor/github.com/Sirupsen/logrus/terminal_bsd.go generated vendored Normal file

@ -0,0 +1,10 @@
// +build darwin freebsd openbsd netbsd dragonfly
// +build !appengine
package logrus
import "syscall"
const ioctlReadTermios = syscall.TIOCGETA
type Termios syscall.Termios

14
vendor/github.com/Sirupsen/logrus/terminal_linux.go generated vendored Normal file

@ -0,0 +1,14 @@
// Based on ssh/terminal:
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
package logrus
import "syscall"
const ioctlReadTermios = syscall.TCGETS
type Termios syscall.Termios

22
vendor/github.com/Sirupsen/logrus/terminal_notwindows.go generated vendored Normal file

@ -0,0 +1,22 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux darwin freebsd openbsd netbsd dragonfly
// +build !appengine
package logrus
import (
"syscall"
"unsafe"
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stderr
var termios Termios
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
return err == 0
}

15
vendor/github.com/Sirupsen/logrus/terminal_solaris.go generated vendored Normal file

@ -0,0 +1,15 @@
// +build solaris,!appengine
package logrus
import (
"os"
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal() bool {
_, err := unix.IoctlGetTermios(int(os.Stdout.Fd()), unix.TCGETA)
return err == nil
}

27
vendor/github.com/Sirupsen/logrus/terminal_windows.go generated vendored Normal file

@ -0,0 +1,27 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build windows,!appengine
package logrus
import (
"syscall"
"unsafe"
)
var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stderr
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}

168
vendor/github.com/Sirupsen/logrus/text_formatter.go generated vendored Normal file

@ -0,0 +1,168 @@
package logrus
import (
"bytes"
"fmt"
"runtime"
"sort"
"strings"
"time"
)
const (
nocolor = 0
red = 31
green = 32
yellow = 33
blue = 34
gray = 37
)
var (
baseTimestamp time.Time
isTerminal bool
)
func init() {
baseTimestamp = time.Now()
isTerminal = IsTerminal()
}
func miniTS() int {
return int(time.Since(baseTimestamp) / time.Second)
}
type TextFormatter struct {
// Set to true to bypass checking for a TTY before outputting colors.
ForceColors bool
// Force disabling colors.
DisableColors bool
// Disable timestamp logging. Useful when output is redirected to a logging
// system that already adds timestamps.
DisableTimestamp bool
// Enable logging the full timestamp when a TTY is attached instead of just
// the time passed since beginning of execution.
FullTimestamp bool
// TimestampFormat to use for display when a full timestamp is printed
TimestampFormat string
// The fields are sorted by default for a consistent output. For applications
// that log extremely frequently and don't use the JSON formatter this may not
// be desired.
DisableSorting bool
}
func (f *TextFormatter) Format(entry *Entry) ([]byte, error) {
var b *bytes.Buffer
var keys []string = make([]string, 0, len(entry.Data))
for k := range entry.Data {
keys = append(keys, k)
}
if !f.DisableSorting {
sort.Strings(keys)
}
if entry.Buffer != nil {
b = entry.Buffer
} else {
b = &bytes.Buffer{}
}
prefixFieldClashes(entry.Data)
isColorTerminal := isTerminal && (runtime.GOOS != "windows")
isColored := (f.ForceColors || isColorTerminal) && !f.DisableColors
timestampFormat := f.TimestampFormat
if timestampFormat == "" {
timestampFormat = DefaultTimestampFormat
}
if isColored {
f.printColored(b, entry, keys, timestampFormat)
} else {
if !f.DisableTimestamp {
f.appendKeyValue(b, "time", entry.Time.Format(timestampFormat))
}
f.appendKeyValue(b, "level", entry.Level.String())
if entry.Message != "" {
f.appendKeyValue(b, "msg", entry.Message)
}
for _, key := range keys {
f.appendKeyValue(b, key, entry.Data[key])
}
}
b.WriteByte('\n')
return b.Bytes(), nil
}
func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []string, timestampFormat string) {
var levelColor int
switch entry.Level {
case DebugLevel:
levelColor = gray
case WarnLevel:
levelColor = yellow
case ErrorLevel, FatalLevel, PanicLevel:
levelColor = red
default:
levelColor = blue
}
levelText := strings.ToUpper(entry.Level.String())[0:4]
if !f.FullTimestamp {
fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d] %-44s ", levelColor, levelText, miniTS(), entry.Message)
} else {
fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%s] %-44s ", levelColor, levelText, entry.Time.Format(timestampFormat), entry.Message)
}
for _, k := range keys {
v := entry.Data[k]
fmt.Fprintf(b, " \x1b[%dm%s\x1b[0m=", levelColor, k)
f.appendValue(b, v)
}
}
func needsQuoting(text string) bool {
for _, ch := range text {
if !((ch >= 'a' && ch <= 'z') ||
(ch >= 'A' && ch <= 'Z') ||
(ch >= '0' && ch <= '9') ||
ch == '-' || ch == '.') {
return true
}
}
return false
}
func (f *TextFormatter) appendKeyValue(b *bytes.Buffer, key string, value interface{}) {
b.WriteString(key)
b.WriteByte('=')
f.appendValue(b, value)
b.WriteByte(' ')
}
func (f *TextFormatter) appendValue(b *bytes.Buffer, value interface{}) {
switch value := value.(type) {
case string:
if !needsQuoting(value) {
b.WriteString(value)
} else {
fmt.Fprintf(b, "%q", value)
}
case error:
errmsg := value.Error()
if !needsQuoting(errmsg) {
b.WriteString(errmsg)
} else {
fmt.Fprintf(b, "%q", errmsg)
}
default:
fmt.Fprint(b, value)
}
}

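For orientation, the formatter above is used by setting fields on `TextFormatter` and assigning it to a logger. A minimal sketch of typical usage of the vendored package follows; the field values and log fields are illustrative, not taken from the plugin itself:

```go
package main

import (
	"github.com/Sirupsen/logrus"
)

func main() {
	log := logrus.New()
	// Force full timestamps and keep the default field sorting so output is stable.
	log.Formatter = &logrus.TextFormatter{
		FullTimestamp:   true,
		TimestampFormat: "2006-01-02 15:04:05",
		DisableColors:   false,
	}

	// Fields are printed as sorted key=value pairs after the message.
	log.WithFields(logrus.Fields{
		"repo":   "octocat/hello-world",
		"status": "success",
	}).Info("sending build notification")
}
```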
53
vendor/github.com/Sirupsen/logrus/writer.go generated vendored Normal file
View file

@ -0,0 +1,53 @@
package logrus
import (
"bufio"
"io"
"runtime"
)
func (logger *Logger) Writer() *io.PipeWriter {
return logger.WriterLevel(InfoLevel)
}
func (logger *Logger) WriterLevel(level Level) *io.PipeWriter {
reader, writer := io.Pipe()
var printFunc func(args ...interface{})
switch level {
case DebugLevel:
printFunc = logger.Debug
case InfoLevel:
printFunc = logger.Info
case WarnLevel:
printFunc = logger.Warn
case ErrorLevel:
printFunc = logger.Error
case FatalLevel:
printFunc = logger.Fatal
case PanicLevel:
printFunc = logger.Panic
default:
printFunc = logger.Print
}
go logger.writerScanner(reader, printFunc)
runtime.SetFinalizer(writer, writerFinalizer)
return writer
}
func (logger *Logger) writerScanner(reader *io.PipeReader, printFunc func(args ...interface{})) {
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
printFunc(scanner.Text())
}
if err := scanner.Err(); err != nil {
logger.Errorf("Error while reading from Writer: %s", err)
}
reader.Close()
}
func writerFinalizer(writer *io.PipeWriter) {
writer.Close()
}

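The `Writer`/`WriterLevel` helpers above expose the logger as an `io.Writer`, with each written line becoming one log entry emitted from a background goroutine. A hedged sketch of typical use, here redirecting the standard library logger; the message is illustrative:

```go
package main

import (
	"log"

	"github.com/Sirupsen/logrus"
)

func main() {
	logger := logrus.New()

	// Each line written to w is logged at WARN level by writerScanner,
	// which runs on a background goroutine.
	w := logger.WriterLevel(logrus.WarnLevel)
	defer w.Close()

	log.SetOutput(w)
	log.SetFlags(0) // logrus adds its own timestamp

	log.Println("SMTP certificate expires soon")
}
```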
24
vendor/github.com/andybalholm/cascadia/LICENSE generated vendored Executable file
View file

@ -0,0 +1,24 @@
Copyright (c) 2011 Andy Balholm. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

7
vendor/github.com/andybalholm/cascadia/README.md generated vendored Normal file
View file

@ -0,0 +1,7 @@
# cascadia
[![](https://travis-ci.org/andybalholm/cascadia.svg)](https://travis-ci.org/andybalholm/cascadia)
The Cascadia package implements CSS selectors for use with the parse trees produced by the html package.
To test CSS selectors without writing Go code, check out [cascadia](https://github.com/suntong/cascadia) the command line tool, a thin wrapper around this package.

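As the README notes, the package operates on parse trees produced by golang.org/x/net/html. A minimal, self-contained sketch of that pairing; the markup and selector are made up for illustration:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/andybalholm/cascadia"
	"golang.org/x/net/html"
)

func main() {
	doc, err := html.Parse(strings.NewReader(
		`<ul><li class="ok">one</li><li>two</li></ul>`))
	if err != nil {
		panic(err)
	}

	// Compile a selector once, then match it against the parsed tree.
	sel := cascadia.MustCompile("li.ok")
	for _, n := range sel.MatchAll(doc) {
		fmt.Println(n.FirstChild.Data) // prints "one"
	}
}
```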
835
vendor/github.com/andybalholm/cascadia/parser.go generated vendored Normal file
View file

@ -0,0 +1,835 @@
// Package cascadia is an implementation of CSS selectors.
package cascadia
import (
"errors"
"fmt"
"regexp"
"strconv"
"strings"
"golang.org/x/net/html"
)
// a parser for CSS selectors
type parser struct {
s string // the source text
i int // the current position
}
// parseEscape parses a backslash escape.
func (p *parser) parseEscape() (result string, err error) {
if len(p.s) < p.i+2 || p.s[p.i] != '\\' {
return "", errors.New("invalid escape sequence")
}
start := p.i + 1
c := p.s[start]
switch {
case c == '\r' || c == '\n' || c == '\f':
return "", errors.New("escaped line ending outside string")
case hexDigit(c):
// unicode escape (hex)
var i int
for i = start; i < p.i+6 && i < len(p.s) && hexDigit(p.s[i]); i++ {
// empty
}
v, _ := strconv.ParseUint(p.s[start:i], 16, 21)
if len(p.s) > i {
switch p.s[i] {
case '\r':
i++
if len(p.s) > i && p.s[i] == '\n' {
i++
}
case ' ', '\t', '\n', '\f':
i++
}
}
p.i = i
return string(rune(v)), nil
}
// Return the literal character after the backslash.
result = p.s[start : start+1]
p.i += 2
return result, nil
}
func hexDigit(c byte) bool {
return '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F'
}
// nameStart returns whether c can be the first character of an identifier
// (not counting an initial hyphen, or an escape sequence).
func nameStart(c byte) bool {
return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_' || c > 127
}
// nameChar returns whether c can be a character within an identifier
// (not counting an escape sequence).
func nameChar(c byte) bool {
return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_' || c > 127 ||
c == '-' || '0' <= c && c <= '9'
}
// parseIdentifier parses an identifier.
func (p *parser) parseIdentifier() (result string, err error) {
startingDash := false
if len(p.s) > p.i && p.s[p.i] == '-' {
startingDash = true
p.i++
}
if len(p.s) <= p.i {
return "", errors.New("expected identifier, found EOF instead")
}
if c := p.s[p.i]; !(nameStart(c) || c == '\\') {
return "", fmt.Errorf("expected identifier, found %c instead", c)
}
result, err = p.parseName()
if startingDash && err == nil {
result = "-" + result
}
return
}
// parseName parses a name (which is like an identifier, but doesn't have
// extra restrictions on the first character).
func (p *parser) parseName() (result string, err error) {
i := p.i
loop:
for i < len(p.s) {
c := p.s[i]
switch {
case nameChar(c):
start := i
for i < len(p.s) && nameChar(p.s[i]) {
i++
}
result += p.s[start:i]
case c == '\\':
p.i = i
val, err := p.parseEscape()
if err != nil {
return "", err
}
i = p.i
result += val
default:
break loop
}
}
if result == "" {
return "", errors.New("expected name, found EOF instead")
}
p.i = i
return result, nil
}
// parseString parses a single- or double-quoted string.
func (p *parser) parseString() (result string, err error) {
i := p.i
if len(p.s) < i+2 {
return "", errors.New("expected string, found EOF instead")
}
quote := p.s[i]
i++
loop:
for i < len(p.s) {
switch p.s[i] {
case '\\':
if len(p.s) > i+1 {
switch c := p.s[i+1]; c {
case '\r':
if len(p.s) > i+2 && p.s[i+2] == '\n' {
i += 3
continue loop
}
fallthrough
case '\n', '\f':
i += 2
continue loop
}
}
p.i = i
val, err := p.parseEscape()
if err != nil {
return "", err
}
i = p.i
result += val
case quote:
break loop
case '\r', '\n', '\f':
return "", errors.New("unexpected end of line in string")
default:
start := i
for i < len(p.s) {
if c := p.s[i]; c == quote || c == '\\' || c == '\r' || c == '\n' || c == '\f' {
break
}
i++
}
result += p.s[start:i]
}
}
if i >= len(p.s) {
return "", errors.New("EOF in string")
}
// Consume the final quote.
i++
p.i = i
return result, nil
}
// parseRegex parses a regular expression; the end is defined by encountering an
// unmatched closing ')' or ']' which is not consumed
func (p *parser) parseRegex() (rx *regexp.Regexp, err error) {
i := p.i
if len(p.s) < i+2 {
return nil, errors.New("expected regular expression, found EOF instead")
}
// number of open parens or brackets;
// when it becomes negative, finished parsing regex
open := 0
loop:
for i < len(p.s) {
switch p.s[i] {
case '(', '[':
open++
case ')', ']':
open--
if open < 0 {
break loop
}
}
i++
}
if i >= len(p.s) {
return nil, errors.New("EOF in regular expression")
}
rx, err = regexp.Compile(p.s[p.i:i])
p.i = i
return rx, err
}
// skipWhitespace consumes whitespace characters and comments.
// It returns true if there was actually anything to skip.
func (p *parser) skipWhitespace() bool {
i := p.i
for i < len(p.s) {
switch p.s[i] {
case ' ', '\t', '\r', '\n', '\f':
i++
continue
case '/':
if strings.HasPrefix(p.s[i:], "/*") {
end := strings.Index(p.s[i+len("/*"):], "*/")
if end != -1 {
i += end + len("/**/")
continue
}
}
}
break
}
if i > p.i {
p.i = i
return true
}
return false
}
// consumeParenthesis consumes an opening parenthesis and any following
// whitespace. It returns true if there was actually a parenthesis to skip.
func (p *parser) consumeParenthesis() bool {
if p.i < len(p.s) && p.s[p.i] == '(' {
p.i++
p.skipWhitespace()
return true
}
return false
}
// consumeClosingParenthesis consumes a closing parenthesis and any preceding
// whitespace. It returns true if there was actually a parenthesis to skip.
func (p *parser) consumeClosingParenthesis() bool {
i := p.i
p.skipWhitespace()
if p.i < len(p.s) && p.s[p.i] == ')' {
p.i++
return true
}
p.i = i
return false
}
// parseTypeSelector parses a type selector (one that matches by tag name).
func (p *parser) parseTypeSelector() (result Selector, err error) {
tag, err := p.parseIdentifier()
if err != nil {
return nil, err
}
return typeSelector(tag), nil
}
// parseIDSelector parses a selector that matches by id attribute.
func (p *parser) parseIDSelector() (Selector, error) {
if p.i >= len(p.s) {
return nil, fmt.Errorf("expected id selector (#id), found EOF instead")
}
if p.s[p.i] != '#' {
return nil, fmt.Errorf("expected id selector (#id), found '%c' instead", p.s[p.i])
}
p.i++
id, err := p.parseName()
if err != nil {
return nil, err
}
return attributeEqualsSelector("id", id), nil
}
// parseClassSelector parses a selector that matches by class attribute.
func (p *parser) parseClassSelector() (Selector, error) {
if p.i >= len(p.s) {
return nil, fmt.Errorf("expected class selector (.class), found EOF instead")
}
if p.s[p.i] != '.' {
return nil, fmt.Errorf("expected class selector (.class), found '%c' instead", p.s[p.i])
}
p.i++
class, err := p.parseIdentifier()
if err != nil {
return nil, err
}
return attributeIncludesSelector("class", class), nil
}
// parseAttributeSelector parses a selector that matches by attribute value.
func (p *parser) parseAttributeSelector() (Selector, error) {
if p.i >= len(p.s) {
return nil, fmt.Errorf("expected attribute selector ([attribute]), found EOF instead")
}
if p.s[p.i] != '[' {
return nil, fmt.Errorf("expected attribute selector ([attribute]), found '%c' instead", p.s[p.i])
}
p.i++
p.skipWhitespace()
key, err := p.parseIdentifier()
if err != nil {
return nil, err
}
p.skipWhitespace()
if p.i >= len(p.s) {
return nil, errors.New("unexpected EOF in attribute selector")
}
if p.s[p.i] == ']' {
p.i++
return attributeExistsSelector(key), nil
}
if p.i+2 >= len(p.s) {
return nil, errors.New("unexpected EOF in attribute selector")
}
op := p.s[p.i : p.i+2]
if op[0] == '=' {
op = "="
} else if op[1] != '=' {
return nil, fmt.Errorf(`expected equality operator, found "%s" instead`, op)
}
p.i += len(op)
p.skipWhitespace()
if p.i >= len(p.s) {
return nil, errors.New("unexpected EOF in attribute selector")
}
var val string
var rx *regexp.Regexp
if op == "#=" {
rx, err = p.parseRegex()
} else {
switch p.s[p.i] {
case '\'', '"':
val, err = p.parseString()
default:
val, err = p.parseIdentifier()
}
}
if err != nil {
return nil, err
}
p.skipWhitespace()
if p.i >= len(p.s) {
return nil, errors.New("unexpected EOF in attribute selector")
}
if p.s[p.i] != ']' {
return nil, fmt.Errorf("expected ']', found '%c' instead", p.s[p.i])
}
p.i++
switch op {
case "=":
return attributeEqualsSelector(key, val), nil
case "!=":
return attributeNotEqualSelector(key, val), nil
case "~=":
return attributeIncludesSelector(key, val), nil
case "|=":
return attributeDashmatchSelector(key, val), nil
case "^=":
return attributePrefixSelector(key, val), nil
case "$=":
return attributeSuffixSelector(key, val), nil
case "*=":
return attributeSubstringSelector(key, val), nil
case "#=":
return attributeRegexSelector(key, rx), nil
}
return nil, fmt.Errorf("attribute operator %q is not supported", op)
}
var errExpectedParenthesis = errors.New("expected '(' but didn't find it")
var errExpectedClosingParenthesis = errors.New("expected ')' but didn't find it")
var errUnmatchedParenthesis = errors.New("unmatched '('")
// parsePseudoclassSelector parses a pseudoclass selector like :not(p).
func (p *parser) parsePseudoclassSelector() (Selector, error) {
if p.i >= len(p.s) {
return nil, fmt.Errorf("expected pseudoclass selector (:pseudoclass), found EOF instead")
}
if p.s[p.i] != ':' {
return nil, fmt.Errorf("expected attribute selector (:pseudoclass), found '%c' instead", p.s[p.i])
}
p.i++
name, err := p.parseIdentifier()
if err != nil {
return nil, err
}
name = toLowerASCII(name)
switch name {
case "not", "has", "haschild":
if !p.consumeParenthesis() {
return nil, errExpectedParenthesis
}
sel, parseErr := p.parseSelectorGroup()
if parseErr != nil {
return nil, parseErr
}
if !p.consumeClosingParenthesis() {
return nil, errExpectedClosingParenthesis
}
switch name {
case "not":
return negatedSelector(sel), nil
case "has":
return hasDescendantSelector(sel), nil
case "haschild":
return hasChildSelector(sel), nil
}
case "contains", "containsown":
if !p.consumeParenthesis() {
return nil, errExpectedParenthesis
}
if p.i == len(p.s) {
return nil, errUnmatchedParenthesis
}
var val string
switch p.s[p.i] {
case '\'', '"':
val, err = p.parseString()
default:
val, err = p.parseIdentifier()
}
if err != nil {
return nil, err
}
val = strings.ToLower(val)
p.skipWhitespace()
if p.i >= len(p.s) {
return nil, errors.New("unexpected EOF in pseudo selector")
}
if !p.consumeClosingParenthesis() {
return nil, errExpectedClosingParenthesis
}
switch name {
case "contains":
return textSubstrSelector(val), nil
case "containsown":
return ownTextSubstrSelector(val), nil
}
case "matches", "matchesown":
if !p.consumeParenthesis() {
return nil, errExpectedParenthesis
}
rx, err := p.parseRegex()
if err != nil {
return nil, err
}
if p.i >= len(p.s) {
return nil, errors.New("unexpected EOF in pseudo selector")
}
if !p.consumeClosingParenthesis() {
return nil, errExpectedClosingParenthesis
}
switch name {
case "matches":
return textRegexSelector(rx), nil
case "matchesown":
return ownTextRegexSelector(rx), nil
}
case "nth-child", "nth-last-child", "nth-of-type", "nth-last-of-type":
if !p.consumeParenthesis() {
return nil, errExpectedParenthesis
}
a, b, err := p.parseNth()
if err != nil {
return nil, err
}
if !p.consumeClosingParenthesis() {
return nil, errExpectedClosingParenthesis
}
if a == 0 {
switch name {
case "nth-child":
return simpleNthChildSelector(b, false), nil
case "nth-of-type":
return simpleNthChildSelector(b, true), nil
case "nth-last-child":
return simpleNthLastChildSelector(b, false), nil
case "nth-last-of-type":
return simpleNthLastChildSelector(b, true), nil
}
}
return nthChildSelector(a, b,
name == "nth-last-child" || name == "nth-last-of-type",
name == "nth-of-type" || name == "nth-last-of-type"),
nil
case "first-child":
return simpleNthChildSelector(1, false), nil
case "last-child":
return simpleNthLastChildSelector(1, false), nil
case "first-of-type":
return simpleNthChildSelector(1, true), nil
case "last-of-type":
return simpleNthLastChildSelector(1, true), nil
case "only-child":
return onlyChildSelector(false), nil
case "only-of-type":
return onlyChildSelector(true), nil
case "input":
return inputSelector, nil
case "empty":
return emptyElementSelector, nil
case "root":
return rootSelector, nil
}
return nil, fmt.Errorf("unknown pseudoclass :%s", name)
}
// parseInteger parses a decimal integer.
func (p *parser) parseInteger() (int, error) {
i := p.i
start := i
for i < len(p.s) && '0' <= p.s[i] && p.s[i] <= '9' {
i++
}
if i == start {
return 0, errors.New("expected integer, but didn't find it")
}
p.i = i
val, err := strconv.Atoi(p.s[start:i])
if err != nil {
return 0, err
}
return val, nil
}
// parseNth parses the argument for :nth-child (normally of the form an+b).
func (p *parser) parseNth() (a, b int, err error) {
// initial state
if p.i >= len(p.s) {
goto eof
}
switch p.s[p.i] {
case '-':
p.i++
goto negativeA
case '+':
p.i++
goto positiveA
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
goto positiveA
case 'n', 'N':
a = 1
p.i++
goto readN
case 'o', 'O', 'e', 'E':
id, nameErr := p.parseName()
if nameErr != nil {
return 0, 0, nameErr
}
id = toLowerASCII(id)
if id == "odd" {
return 2, 1, nil
}
if id == "even" {
return 2, 0, nil
}
return 0, 0, fmt.Errorf("expected 'odd' or 'even', but found '%s' instead", id)
default:
goto invalid
}
positiveA:
if p.i >= len(p.s) {
goto eof
}
switch p.s[p.i] {
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
a, err = p.parseInteger()
if err != nil {
return 0, 0, err
}
goto readA
case 'n', 'N':
a = 1
p.i++
goto readN
default:
goto invalid
}
negativeA:
if p.i >= len(p.s) {
goto eof
}
switch p.s[p.i] {
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
a, err = p.parseInteger()
if err != nil {
return 0, 0, err
}
a = -a
goto readA
case 'n', 'N':
a = -1
p.i++
goto readN
default:
goto invalid
}
readA:
if p.i >= len(p.s) {
goto eof
}
switch p.s[p.i] {
case 'n', 'N':
p.i++
goto readN
default:
// The number we read as a is actually b.
return 0, a, nil
}
readN:
p.skipWhitespace()
if p.i >= len(p.s) {
goto eof
}
switch p.s[p.i] {
case '+':
p.i++
p.skipWhitespace()
b, err = p.parseInteger()
if err != nil {
return 0, 0, err
}
return a, b, nil
case '-':
p.i++
p.skipWhitespace()
b, err = p.parseInteger()
if err != nil {
return 0, 0, err
}
return a, -b, nil
default:
return a, 0, nil
}
eof:
return 0, 0, errors.New("unexpected EOF while attempting to parse expression of form an+b")
invalid:
return 0, 0, errors.New("unexpected character while attempting to parse expression of form an+b")
}
// parseSimpleSelectorSequence parses a selector sequence that applies to
// a single element.
func (p *parser) parseSimpleSelectorSequence() (Selector, error) {
var result Selector
if p.i >= len(p.s) {
return nil, errors.New("expected selector, found EOF instead")
}
switch p.s[p.i] {
case '*':
// It's the universal selector. Just skip over it, since it doesn't affect the meaning.
p.i++
case '#', '.', '[', ':':
// There's no type selector. Wait to process the other till the main loop.
default:
r, err := p.parseTypeSelector()
if err != nil {
return nil, err
}
result = r
}
loop:
for p.i < len(p.s) {
var ns Selector
var err error
switch p.s[p.i] {
case '#':
ns, err = p.parseIDSelector()
case '.':
ns, err = p.parseClassSelector()
case '[':
ns, err = p.parseAttributeSelector()
case ':':
ns, err = p.parsePseudoclassSelector()
default:
break loop
}
if err != nil {
return nil, err
}
if result == nil {
result = ns
} else {
result = intersectionSelector(result, ns)
}
}
if result == nil {
result = func(n *html.Node) bool {
return n.Type == html.ElementNode
}
}
return result, nil
}
// parseSelector parses a selector that may include combinators.
func (p *parser) parseSelector() (result Selector, err error) {
p.skipWhitespace()
result, err = p.parseSimpleSelectorSequence()
if err != nil {
return
}
for {
var combinator byte
if p.skipWhitespace() {
combinator = ' '
}
if p.i >= len(p.s) {
return
}
switch p.s[p.i] {
case '+', '>', '~':
combinator = p.s[p.i]
p.i++
p.skipWhitespace()
case ',', ')':
// These characters can't begin a selector, but they can legally occur after one.
return
}
if combinator == 0 {
return
}
c, err := p.parseSimpleSelectorSequence()
if err != nil {
return nil, err
}
switch combinator {
case ' ':
result = descendantSelector(result, c)
case '>':
result = childSelector(result, c)
case '+':
result = siblingSelector(result, c, true)
case '~':
result = siblingSelector(result, c, false)
}
}
panic("unreachable")
}
// parseSelectorGroup parses a group of selectors, separated by commas.
func (p *parser) parseSelectorGroup() (result Selector, err error) {
result, err = p.parseSelector()
if err != nil {
return
}
for p.i < len(p.s) {
if p.s[p.i] != ',' {
return result, nil
}
p.i++
c, err := p.parseSelector()
if err != nil {
return nil, err
}
result = unionSelector(result, c)
}
return
}

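The parser above handles standard CSS selector syntax plus a few non-standard extensions (`:contains`, `:haschild`, and the `#=` regular-expression attribute operator). A small sketch that exercises several of the forms parsed by the functions above; the selectors themselves are arbitrary examples:

```go
package main

import (
	"fmt"

	"github.com/andybalholm/cascadia"
)

func main() {
	selectors := []string{
		`a[href^="https://"]`,        // attribute prefix operator
		`tr:nth-child(2n+1)`,         // an+b argument handled by parseNth
		`div.note > p:first-child`,   // class, child combinator, pseudo-class
		`p:contains("build failed")`, // non-standard text-substring pseudo-class
		`span[id#=(\w+-\d+)]`,        // non-standard #= regular-expression operator
	}
	for _, s := range selectors {
		if _, err := cascadia.Compile(s); err != nil {
			fmt.Printf("%-30s -> error: %v\n", s, err)
			continue
		}
		fmt.Printf("%-30s -> ok\n", s)
	}
}
```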
622
vendor/github.com/andybalholm/cascadia/selector.go generated vendored Normal file
View file

@ -0,0 +1,622 @@
package cascadia
import (
"bytes"
"fmt"
"regexp"
"strings"
"golang.org/x/net/html"
)
// the Selector type, and functions for creating them
// A Selector is a function which tells whether a node matches or not.
type Selector func(*html.Node) bool
// hasChildMatch returns whether n has any child that matches a.
func hasChildMatch(n *html.Node, a Selector) bool {
for c := n.FirstChild; c != nil; c = c.NextSibling {
if a(c) {
return true
}
}
return false
}
// hasDescendantMatch performs a depth-first search of n's descendants,
// testing whether any of them match a. It returns true as soon as a match is
// found, or false if no match is found.
func hasDescendantMatch(n *html.Node, a Selector) bool {
for c := n.FirstChild; c != nil; c = c.NextSibling {
if a(c) || (c.Type == html.ElementNode && hasDescendantMatch(c, a)) {
return true
}
}
return false
}
// Compile parses a selector and returns, if successful, a Selector object
// that can be used to match against html.Node objects.
func Compile(sel string) (Selector, error) {
p := &parser{s: sel}
compiled, err := p.parseSelectorGroup()
if err != nil {
return nil, err
}
if p.i < len(sel) {
return nil, fmt.Errorf("parsing %q: %d bytes left over", sel, len(sel)-p.i)
}
return compiled, nil
}
// MustCompile is like Compile, but panics instead of returning an error.
func MustCompile(sel string) Selector {
compiled, err := Compile(sel)
if err != nil {
panic(err)
}
return compiled
}
// MatchAll returns a slice of the nodes that match the selector,
// from n and its children.
func (s Selector) MatchAll(n *html.Node) []*html.Node {
return s.matchAllInto(n, nil)
}
func (s Selector) matchAllInto(n *html.Node, storage []*html.Node) []*html.Node {
if s(n) {
storage = append(storage, n)
}
for child := n.FirstChild; child != nil; child = child.NextSibling {
storage = s.matchAllInto(child, storage)
}
return storage
}
// Match returns true if the node matches the selector.
func (s Selector) Match(n *html.Node) bool {
return s(n)
}
// MatchFirst returns the first node that matches s, from n and its children.
func (s Selector) MatchFirst(n *html.Node) *html.Node {
if s.Match(n) {
return n
}
for c := n.FirstChild; c != nil; c = c.NextSibling {
m := s.MatchFirst(c)
if m != nil {
return m
}
}
return nil
}
// Filter returns the nodes in nodes that match the selector.
func (s Selector) Filter(nodes []*html.Node) (result []*html.Node) {
for _, n := range nodes {
if s(n) {
result = append(result, n)
}
}
return result
}
// typeSelector returns a Selector that matches elements with a given tag name.
func typeSelector(tag string) Selector {
tag = toLowerASCII(tag)
return func(n *html.Node) bool {
return n.Type == html.ElementNode && n.Data == tag
}
}
// toLowerASCII returns s with all ASCII capital letters lowercased.
func toLowerASCII(s string) string {
var b []byte
for i := 0; i < len(s); i++ {
if c := s[i]; 'A' <= c && c <= 'Z' {
if b == nil {
b = make([]byte, len(s))
copy(b, s)
}
b[i] = s[i] + ('a' - 'A')
}
}
if b == nil {
return s
}
return string(b)
}
// attributeSelector returns a Selector that matches elements
// where the attribute named key satisifes the function f.
func attributeSelector(key string, f func(string) bool) Selector {
key = toLowerASCII(key)
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
for _, a := range n.Attr {
if a.Key == key && f(a.Val) {
return true
}
}
return false
}
}
// attributeExistsSelector returns a Selector that matches elements that have
// an attribute named key.
func attributeExistsSelector(key string) Selector {
return attributeSelector(key, func(string) bool { return true })
}
// attributeEqualsSelector returns a Selector that matches elements where
// the attribute named key has the value val.
func attributeEqualsSelector(key, val string) Selector {
return attributeSelector(key,
func(s string) bool {
return s == val
})
}
// attributeNotEqualSelector returns a Selector that matches elements where
// the attribute named key does not have the value val.
func attributeNotEqualSelector(key, val string) Selector {
key = toLowerASCII(key)
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
for _, a := range n.Attr {
if a.Key == key && a.Val == val {
return false
}
}
return true
}
}
// attributeIncludesSelector returns a Selector that matches elements where
// the attribute named key is a whitespace-separated list that includes val.
func attributeIncludesSelector(key, val string) Selector {
return attributeSelector(key,
func(s string) bool {
for s != "" {
i := strings.IndexAny(s, " \t\r\n\f")
if i == -1 {
return s == val
}
if s[:i] == val {
return true
}
s = s[i+1:]
}
return false
})
}
// attributeDashmatchSelector returns a Selector that matches elements where
// the attribute named key equals val or starts with val plus a hyphen.
func attributeDashmatchSelector(key, val string) Selector {
return attributeSelector(key,
func(s string) bool {
if s == val {
return true
}
if len(s) <= len(val) {
return false
}
if s[:len(val)] == val && s[len(val)] == '-' {
return true
}
return false
})
}
// attributePrefixSelector returns a Selector that matches elements where
// the attribute named key starts with val.
func attributePrefixSelector(key, val string) Selector {
return attributeSelector(key,
func(s string) bool {
if strings.TrimSpace(s) == "" {
return false
}
return strings.HasPrefix(s, val)
})
}
// attributeSuffixSelector returns a Selector that matches elements where
// the attribute named key ends with val.
func attributeSuffixSelector(key, val string) Selector {
return attributeSelector(key,
func(s string) bool {
if strings.TrimSpace(s) == "" {
return false
}
return strings.HasSuffix(s, val)
})
}
// attributeSubstringSelector returns a Selector that matches nodes where
// the attribute named key contains val.
func attributeSubstringSelector(key, val string) Selector {
return attributeSelector(key,
func(s string) bool {
if strings.TrimSpace(s) == "" {
return false
}
return strings.Contains(s, val)
})
}
// attributeRegexSelector returns a Selector that matches nodes where
// the attribute named key matches the regular expression rx
func attributeRegexSelector(key string, rx *regexp.Regexp) Selector {
return attributeSelector(key,
func(s string) bool {
return rx.MatchString(s)
})
}
// intersectionSelector returns a selector that matches nodes that match
// both a and b.
func intersectionSelector(a, b Selector) Selector {
return func(n *html.Node) bool {
return a(n) && b(n)
}
}
// unionSelector returns a selector that matches elements that match
// either a or b.
func unionSelector(a, b Selector) Selector {
return func(n *html.Node) bool {
return a(n) || b(n)
}
}
// negatedSelector returns a selector that matches elements that do not match a.
func negatedSelector(a Selector) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
return !a(n)
}
}
// writeNodeText writes the text contained in n and its descendants to b.
func writeNodeText(n *html.Node, b *bytes.Buffer) {
switch n.Type {
case html.TextNode:
b.WriteString(n.Data)
case html.ElementNode:
for c := n.FirstChild; c != nil; c = c.NextSibling {
writeNodeText(c, b)
}
}
}
// nodeText returns the text contained in n and its descendants.
func nodeText(n *html.Node) string {
var b bytes.Buffer
writeNodeText(n, &b)
return b.String()
}
// nodeOwnText returns the contents of the text nodes that are direct
// children of n.
func nodeOwnText(n *html.Node) string {
var b bytes.Buffer
for c := n.FirstChild; c != nil; c = c.NextSibling {
if c.Type == html.TextNode {
b.WriteString(c.Data)
}
}
return b.String()
}
// textSubstrSelector returns a selector that matches nodes that
// contain the given text.
func textSubstrSelector(val string) Selector {
return func(n *html.Node) bool {
text := strings.ToLower(nodeText(n))
return strings.Contains(text, val)
}
}
// ownTextSubstrSelector returns a selector that matches nodes that
// directly contain the given text
func ownTextSubstrSelector(val string) Selector {
return func(n *html.Node) bool {
text := strings.ToLower(nodeOwnText(n))
return strings.Contains(text, val)
}
}
// textRegexSelector returns a selector that matches nodes whose text matches
// the specified regular expression
func textRegexSelector(rx *regexp.Regexp) Selector {
return func(n *html.Node) bool {
return rx.MatchString(nodeText(n))
}
}
// ownTextRegexSelector returns a selector that matches nodes whose text
// directly matches the specified regular expression
func ownTextRegexSelector(rx *regexp.Regexp) Selector {
return func(n *html.Node) bool {
return rx.MatchString(nodeOwnText(n))
}
}
// hasChildSelector returns a selector that matches elements
// with a child that matches a.
func hasChildSelector(a Selector) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
return hasChildMatch(n, a)
}
}
// hasDescendantSelector returns a selector that matches elements
// with any descendant that matches a.
func hasDescendantSelector(a Selector) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
return hasDescendantMatch(n, a)
}
}
// nthChildSelector returns a selector that implements :nth-child(an+b).
// If last is true, implements :nth-last-child instead.
// If ofType is true, implements :nth-of-type instead.
func nthChildSelector(a, b int, last, ofType bool) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
parent := n.Parent
if parent == nil {
return false
}
if parent.Type == html.DocumentNode {
return false
}
i := -1
count := 0
for c := parent.FirstChild; c != nil; c = c.NextSibling {
if (c.Type != html.ElementNode) || (ofType && c.Data != n.Data) {
continue
}
count++
if c == n {
i = count
if !last {
break
}
}
}
if i == -1 {
// This shouldn't happen, since n should always be one of its parent's children.
return false
}
if last {
i = count - i + 1
}
i -= b
if a == 0 {
return i == 0
}
return i%a == 0 && i/a >= 0
}
}
// simpleNthChildSelector returns a selector that implements :nth-child(b).
// If ofType is true, implements :nth-of-type instead.
func simpleNthChildSelector(b int, ofType bool) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
parent := n.Parent
if parent == nil {
return false
}
if parent.Type == html.DocumentNode {
return false
}
count := 0
for c := parent.FirstChild; c != nil; c = c.NextSibling {
if c.Type != html.ElementNode || (ofType && c.Data != n.Data) {
continue
}
count++
if c == n {
return count == b
}
if count >= b {
return false
}
}
return false
}
}
// simpleNthLastChildSelector returns a selector that implements
// :nth-last-child(b). If ofType is true, implements :nth-last-of-type
// instead.
func simpleNthLastChildSelector(b int, ofType bool) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
parent := n.Parent
if parent == nil {
return false
}
if parent.Type == html.DocumentNode {
return false
}
count := 0
for c := parent.LastChild; c != nil; c = c.PrevSibling {
if c.Type != html.ElementNode || (ofType && c.Data != n.Data) {
continue
}
count++
if c == n {
return count == b
}
if count >= b {
return false
}
}
return false
}
}
// onlyChildSelector returns a selector that implements :only-child.
// If ofType is true, it implements :only-of-type instead.
func onlyChildSelector(ofType bool) Selector {
return func(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
parent := n.Parent
if parent == nil {
return false
}
if parent.Type == html.DocumentNode {
return false
}
count := 0
for c := parent.FirstChild; c != nil; c = c.NextSibling {
if (c.Type != html.ElementNode) || (ofType && c.Data != n.Data) {
continue
}
count++
if count > 1 {
return false
}
}
return count == 1
}
}
// inputSelector is a Selector that matches input, select, textarea and button elements.
func inputSelector(n *html.Node) bool {
return n.Type == html.ElementNode && (n.Data == "input" || n.Data == "select" || n.Data == "textarea" || n.Data == "button")
}
// emptyElementSelector is a Selector that matches empty elements.
func emptyElementSelector(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
for c := n.FirstChild; c != nil; c = c.NextSibling {
switch c.Type {
case html.ElementNode, html.TextNode:
return false
}
}
return true
}
// descendantSelector returns a Selector that matches an element if
// it matches d and has an ancestor that matches a.
func descendantSelector(a, d Selector) Selector {
return func(n *html.Node) bool {
if !d(n) {
return false
}
for p := n.Parent; p != nil; p = p.Parent {
if a(p) {
return true
}
}
return false
}
}
// childSelector returns a Selector that matches an element if
// it matches d and its parent matches a.
func childSelector(a, d Selector) Selector {
return func(n *html.Node) bool {
return d(n) && n.Parent != nil && a(n.Parent)
}
}
// siblingSelector returns a Selector that matches an element
// if it matches s2 and is preceded by an element that matches s1.
// If adjacent is true, the sibling must be immediately before the element.
func siblingSelector(s1, s2 Selector, adjacent bool) Selector {
return func(n *html.Node) bool {
if !s2(n) {
return false
}
if adjacent {
for n = n.PrevSibling; n != nil; n = n.PrevSibling {
if n.Type == html.TextNode || n.Type == html.CommentNode {
continue
}
return s1(n)
}
return false
}
// Walk backwards looking for element that matches s1
for c := n.PrevSibling; c != nil; c = c.PrevSibling {
if s1(c) {
return true
}
}
return false
}
}
// rootSelector implements :root
func rootSelector(n *html.Node) bool {
if n.Type != html.ElementNode {
return false
}
if n.Parent == nil {
return false
}
return n.Parent.Type == html.DocumentNode
}

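Because `Selector` is just `func(*html.Node) bool`, compiled selectors compose freely with hand-written ones. A sketch under that assumption; the helper name and markup are invented for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"

	"github.com/andybalholm/cascadia"
	"golang.org/x/net/html"
)

// linksWithText matches <a href> elements whose direct text contains substr,
// combining a compiled selector with a custom predicate.
func linksWithText(substr string) cascadia.Selector {
	anchors := cascadia.MustCompile("a[href]")
	return func(n *html.Node) bool {
		if !anchors.Match(n) {
			return false
		}
		var b bytes.Buffer
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			if c.Type == html.TextNode {
				b.WriteString(c.Data)
			}
		}
		return strings.Contains(b.String(), substr)
	}
}

func main() {
	doc, _ := html.Parse(strings.NewReader(
		`<p><a href="/logs">build logs</a> <a href="/home">home</a></p>`))
	for _, n := range linksWithText("logs").MatchAll(doc) {
		for _, a := range n.Attr {
			if a.Key == "href" {
				fmt.Println(a.Val) // prints "/logs"
			}
		}
	}
}
```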
22
vendor/github.com/aymerick/douceur/LICENSE generated vendored Normal file
View file

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Aymerick JEHANNE
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

60
vendor/github.com/aymerick/douceur/css/declaration.go generated vendored Normal file
View file

@ -0,0 +1,60 @@
package css
import "fmt"
// Declaration represents a parsed style property
type Declaration struct {
Property string
Value string
Important bool
}
// NewDeclaration instantiates a new Declaration
func NewDeclaration() *Declaration {
return &Declaration{}
}
// Returns string representation of the Declaration
func (decl *Declaration) String() string {
return decl.StringWithImportant(true)
}
// StringWithImportant returns string representation with optional !important part
func (decl *Declaration) StringWithImportant(option bool) string {
result := fmt.Sprintf("%s: %s", decl.Property, decl.Value)
if option && decl.Important {
result += " !important"
}
result += ";"
return result
}
// Equal returns true if both Declarations are equal
func (decl *Declaration) Equal(other *Declaration) bool {
return (decl.Property == other.Property) && (decl.Value == other.Value) && (decl.Important == other.Important)
}
//
// DeclarationsByProperty
//
// DeclarationsByProperty represents sortable style declarations
type DeclarationsByProperty []*Declaration
// Implements sort.Interface
func (declarations DeclarationsByProperty) Len() int {
return len(declarations)
}
// Implements sort.Interface
func (declarations DeclarationsByProperty) Swap(i, j int) {
declarations[i], declarations[j] = declarations[j], declarations[i]
}
// Implements sort.Interface
func (declarations DeclarationsByProperty) Less(i, j int) bool {
return declarations[i].Property < declarations[j].Property
}

230
vendor/github.com/aymerick/douceur/css/rule.go generated vendored Normal file
View file

@ -0,0 +1,230 @@
package css
import (
"fmt"
"strings"
)
const (
indentSpace = 2
)
// RuleKind represents a Rule kind
type RuleKind int
// Rule kinds
const (
QualifiedRule RuleKind = iota
AtRule
)
// At Rules that have Rules inside their block instead of Declarations
var atRulesWithRulesBlock = []string{
"@document", "@font-feature-values", "@keyframes", "@media", "@supports",
}
// Rule represents a parsed CSS rule
type Rule struct {
Kind RuleKind
// At Rule name (eg: "@media")
Name string
// Raw prelude
Prelude string
// Qualified Rule selectors parsed from prelude
Selectors []string
// Style properties
Declarations []*Declaration
// At Rule embedded rules
Rules []*Rule
// Current rule embedding level
EmbedLevel int
}
// NewRule instantiates a new Rule
func NewRule(kind RuleKind) *Rule {
return &Rule{
Kind: kind,
}
}
// Returns string representation of rule kind
func (kind RuleKind) String() string {
switch kind {
case QualifiedRule:
return "Qualified Rule"
case AtRule:
return "At Rule"
default:
return "WAT"
}
}
// EmbedsRules returns true if this rule embeds other rules
func (rule *Rule) EmbedsRules() bool {
if rule.Kind == AtRule {
for _, atRuleName := range atRulesWithRulesBlock {
if rule.Name == atRuleName {
return true
}
}
}
return false
}
// Equal returns true if both rules are equal
func (rule *Rule) Equal(other *Rule) bool {
if (rule.Kind != other.Kind) ||
(rule.Prelude != other.Prelude) ||
(rule.Name != other.Name) {
return false
}
if (len(rule.Selectors) != len(other.Selectors)) ||
(len(rule.Declarations) != len(other.Declarations)) ||
(len(rule.Rules) != len(other.Rules)) {
return false
}
for i, sel := range rule.Selectors {
if sel != other.Selectors[i] {
return false
}
}
for i, decl := range rule.Declarations {
if !decl.Equal(other.Declarations[i]) {
return false
}
}
for i, rule := range rule.Rules {
if !rule.Equal(other.Rules[i]) {
return false
}
}
return true
}
// Diff returns a string representation of the differences between rules
func (rule *Rule) Diff(other *Rule) []string {
result := []string{}
if rule.Kind != other.Kind {
result = append(result, fmt.Sprintf("Kind: %s | %s", rule.Kind.String(), other.Kind.String()))
}
if rule.Prelude != other.Prelude {
result = append(result, fmt.Sprintf("Prelude: \"%s\" | \"%s\"", rule.Prelude, other.Prelude))
}
if rule.Name != other.Name {
result = append(result, fmt.Sprintf("Name: \"%s\" | \"%s\"", rule.Name, other.Name))
}
if len(rule.Selectors) != len(other.Selectors) {
result = append(result, fmt.Sprintf("Selectors: %v | %v", strings.Join(rule.Selectors, ", "), strings.Join(other.Selectors, ", ")))
} else {
for i, sel := range rule.Selectors {
if sel != other.Selectors[i] {
result = append(result, fmt.Sprintf("Selector: \"%s\" | \"%s\"", sel, other.Selectors[i]))
}
}
}
if len(rule.Declarations) != len(other.Declarations) {
result = append(result, fmt.Sprintf("Declarations Nb: %d | %d", len(rule.Declarations), len(other.Declarations)))
} else {
for i, decl := range rule.Declarations {
if !decl.Equal(other.Declarations[i]) {
result = append(result, fmt.Sprintf("Declaration: \"%s\" | \"%s\"", decl.String(), other.Declarations[i].String()))
}
}
}
if len(rule.Rules) != len(other.Rules) {
result = append(result, fmt.Sprintf("Rules Nb: %d | %d", len(rule.Rules), len(other.Rules)))
} else {
for i, rule := range rule.Rules {
if !rule.Equal(other.Rules[i]) {
result = append(result, fmt.Sprintf("Rule: \"%s\" | \"%s\"", rule.String(), other.Rules[i].String()))
}
}
}
return result
}
// Returns the string representation of a rule
func (rule *Rule) String() string {
result := ""
if rule.Kind == QualifiedRule {
for i, sel := range rule.Selectors {
if i != 0 {
result += ", "
}
result += sel
}
} else {
// AtRule
result += fmt.Sprintf("%s", rule.Name)
if rule.Prelude != "" {
if result != "" {
result += " "
}
result += fmt.Sprintf("%s", rule.Prelude)
}
}
if (len(rule.Declarations) == 0) && (len(rule.Rules) == 0) {
result += ";"
} else {
result += " {\n"
if rule.EmbedsRules() {
for _, subRule := range rule.Rules {
result += fmt.Sprintf("%s%s\n", rule.indent(), subRule.String())
}
} else {
for _, decl := range rule.Declarations {
result += fmt.Sprintf("%s%s\n", rule.indent(), decl.String())
}
}
result += fmt.Sprintf("%s}", rule.indentEndBlock())
}
return result
}
// Returns indentation spaces for declarations and rules
func (rule *Rule) indent() string {
result := ""
for i := 0; i < ((rule.EmbedLevel + 1) * indentSpace); i++ {
result += " "
}
return result
}
// Returns indentation spaces for the end-of-block character
func (rule *Rule) indentEndBlock() string {
result := ""
for i := 0; i < (rule.EmbedLevel * indentSpace); i++ {
result += " "
}
return result
}

25
vendor/github.com/aymerick/douceur/css/stylesheet.go generated vendored Normal file
View file

@ -0,0 +1,25 @@
package css
// Stylesheet represents a parsed stylesheet
type Stylesheet struct {
Rules []*Rule
}
// NewStylesheet instantiates a new Stylesheet
func NewStylesheet() *Stylesheet {
return &Stylesheet{}
}
// Returns string representation of the Stylesheet
func (sheet *Stylesheet) String() string {
result := ""
for _, rule := range sheet.Rules {
if result != "" {
result += "\n"
}
result += rule.String()
}
return result
}

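The three css types above (Declaration, Rule, Stylesheet) are plain builders with String methods. A short sketch of constructing and printing one rule by hand; the property and selectors are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/aymerick/douceur/css"
)

func main() {
	decl := css.NewDeclaration()
	decl.Property = "color"
	decl.Value = "#333"
	decl.Important = true

	rule := css.NewRule(css.QualifiedRule)
	rule.Selectors = []string{"p", ".footer"}
	rule.Declarations = []*css.Declaration{decl}

	sheet := css.NewStylesheet()
	sheet.Rules = []*css.Rule{rule}

	fmt.Println(sheet.String())
	// p, .footer {
	//   color: #333 !important;
	// }
}
```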
181
vendor/github.com/aymerick/douceur/inliner/element.go generated vendored Normal file
View file

@ -0,0 +1,181 @@
package inliner
import (
"sort"
"github.com/PuerkitoBio/goquery"
"github.com/aymerick/douceur/css"
"github.com/aymerick/douceur/parser"
)
// Element represents an HTML element with matching CSS rules
type Element struct {
// The goquery handler
elt *goquery.Selection
// The style rules to apply on that element
styleRules []*StyleRule
}
// ElementAttr represents an HTML element attribute
type ElementAttr struct {
attr string
elements []string
}
// Keyed by style property name
var styleToAttr map[string]*ElementAttr
func init() {
// Borrowed from premailer:
// https://github.com/premailer/premailer/blob/master/lib/premailer/premailer.rb
styleToAttr = map[string]*ElementAttr{
"text-align": {
"align",
[]string{"h1", "h2", "h3", "h4", "h5", "h6", "p", "div", "blockquote", "tr", "th", "td"},
},
"background-color": {
"bgcolor",
[]string{"body", "table", "tr", "th", "td"},
},
"background-image": {
"background",
[]string{"table"},
},
"vertical-align": {
"valign",
[]string{"th", "td"},
},
"float": {
"align",
[]string{"img"},
},
// @todo width and height ?
}
}
// NewElement instantiates a new element
func NewElement(elt *goquery.Selection) *Element {
return &Element{
elt: elt,
}
}
// Add a Style Rule to Element
func (element *Element) addStyleRule(styleRule *StyleRule) {
element.styleRules = append(element.styleRules, styleRule)
}
// Inline styles on element
func (element *Element) inline() error {
// compute declarations
declarations, err := element.computeDeclarations()
if err != nil {
return err
}
// set style attribute
styleValue := computeStyleValue(declarations)
if styleValue != "" {
element.elt.SetAttr("style", styleValue)
}
// set additional attributes
element.setAttributesFromStyle(declarations)
return nil
}
// Compute css declarations
func (element *Element) computeDeclarations() ([]*css.Declaration, error) {
result := []*css.Declaration{}
styles := make(map[string]*StyleDeclaration)
// First: parsed stylesheets rules
mergeStyleDeclarations(element.styleRules, styles)
// Then: inline rules
inlineRules, err := element.parseInlineStyle()
if err != nil {
return result, err
}
mergeStyleDeclarations(inlineRules, styles)
// map to array
for _, styleDecl := range styles {
result = append(result, styleDecl.Declaration)
}
// sort declarations by property name
sort.Sort(css.DeclarationsByProperty(result))
return result, nil
}
// Parse inline style rules
func (element *Element) parseInlineStyle() ([]*StyleRule, error) {
result := []*StyleRule{}
styleValue, exists := element.elt.Attr("style")
if (styleValue == "") || !exists {
return result, nil
}
declarations, err := parser.ParseDeclarations(styleValue)
if err != nil {
return result, err
}
result = append(result, NewStyleRule(inlineFakeSelector, declarations))
return result, nil
}
// Set additional attributes from style declarations
func (element *Element) setAttributesFromStyle(declarations []*css.Declaration) {
// for each style declarations
for _, declaration := range declarations {
if eltAttr := styleToAttr[declaration.Property]; eltAttr != nil {
// check if element is allowed for that attribute
for _, eltAllowed := range eltAttr.elements {
if element.elt.Nodes[0].Data == eltAllowed {
element.elt.SetAttr(eltAttr.attr, declaration.Value)
break
}
}
}
}
}
// helper
func computeStyleValue(declarations []*css.Declaration) string {
result := ""
// set style attribute value
for _, declaration := range declarations {
if result != "" {
result += " "
}
result += declaration.StringWithImportant(false)
}
return result
}
// helper
func mergeStyleDeclarations(styleRules []*StyleRule, output map[string]*StyleDeclaration) {
for _, styleRule := range styleRules {
for _, declaration := range styleRule.Declarations {
styleDecl := NewStyleDeclaration(styleRule, declaration)
if (output[declaration.Property] == nil) || (styleDecl.Specificity() >= output[declaration.Property].Specificity()) {
output[declaration.Property] = styleDecl
}
}
}
}

243
vendor/github.com/aymerick/douceur/inliner/inliner.go generated vendored Normal file
View file

@ -0,0 +1,243 @@
package inliner
import (
"fmt"
"strconv"
"strings"
"github.com/PuerkitoBio/goquery"
"github.com/aymerick/douceur/css"
"github.com/aymerick/douceur/parser"
"golang.org/x/net/html"
)
const (
eltMarkerAttr = "douceur-mark"
)
var unsupportedSelectors = []string{
":active", ":after", ":before", ":checked", ":disabled", ":enabled",
":first-line", ":first-letter", ":focus", ":hover", ":invalid", ":in-range",
":lang", ":link", ":root", ":selection", ":target", ":valid", ":visited"}
// Inliner represents a CSS Inliner
type Inliner struct {
// Raw HTML
html string
// Parsed HTML document
doc *goquery.Document
// Parsed stylesheets
stylesheets []*css.Stylesheet
// Collected inlinable style rules
rules []*StyleRule
// HTML elements matching collected inlinable style rules
elements map[string]*Element
// CSS rules that are not inlinable but that must be inserted in output document
rawRules []fmt.Stringer
// current element marker value
eltMarker int
}
// NewInliner instantiates a new Inliner
func NewInliner(html string) *Inliner {
return &Inliner{
html: html,
elements: make(map[string]*Element),
}
}
// Inline inlines CSS into the HTML document
func Inline(html string) (string, error) {
result, err := NewInliner(html).Inline()
if err != nil {
return "", err
}
return result, nil
}
// Inline inlines CSS and returns HTML
func (inliner *Inliner) Inline() (string, error) {
// parse HTML document
if err := inliner.parseHTML(); err != nil {
return "", err
}
// parse stylesheets
if err := inliner.parseStylesheets(); err != nil {
return "", err
}
// collect elements and style rules
inliner.collectElementsAndRules()
// inline css
if err := inliner.inlineStyleRules(); err != nil {
return "", err
}
// insert raw stylesheet
inliner.insertRawStylesheet()
// generate HTML document
return inliner.genHTML()
}
// Parses raw html
func (inliner *Inliner) parseHTML() error {
doc, err := goquery.NewDocumentFromReader(strings.NewReader(inliner.html))
if err != nil {
return err
}
inliner.doc = doc
return nil
}
// Parses and removes stylesheets from HTML document
func (inliner *Inliner) parseStylesheets() error {
var result error
inliner.doc.Find("style").EachWithBreak(func(i int, s *goquery.Selection) bool {
stylesheet, err := parser.Parse(s.Text())
if err != nil {
result = err
return false
}
inliner.stylesheets = append(inliner.stylesheets, stylesheet)
// removes parsed stylesheet
s.Remove()
return true
})
return result
}
// Collects HTML elements matching parsed stylesheets, and thus collects the style rules that are used
func (inliner *Inliner) collectElementsAndRules() {
for _, stylesheet := range inliner.stylesheets {
for _, rule := range stylesheet.Rules {
if rule.Kind == css.QualifiedRule {
// Let's go!
inliner.handleQualifiedRule(rule)
} else {
// Keep it 'as is'
inliner.rawRules = append(inliner.rawRules, rule)
}
}
}
}
// Handles parsed qualified rule
func (inliner *Inliner) handleQualifiedRule(rule *css.Rule) {
for _, selector := range rule.Selectors {
if Inlinable(selector) {
inliner.doc.Find(selector).Each(func(i int, s *goquery.Selection) {
// get marker
eltMarker, exists := s.Attr(eltMarkerAttr)
if !exists {
// mark element
eltMarker = strconv.Itoa(inliner.eltMarker)
s.SetAttr(eltMarkerAttr, eltMarker)
inliner.eltMarker++
// add new element
inliner.elements[eltMarker] = NewElement(s)
}
// add style rule for element
inliner.elements[eltMarker].addStyleRule(NewStyleRule(selector, rule.Declarations))
})
} else {
// Keep it 'as is'
inliner.rawRules = append(inliner.rawRules, NewStyleRule(selector, rule.Declarations))
}
}
}
// Inline style rules in HTML document
func (inliner *Inliner) inlineStyleRules() error {
for _, element := range inliner.elements {
// remove marker
element.elt.RemoveAttr(eltMarkerAttr)
// inline element
err := element.inline()
if err != nil {
return err
}
}
return nil
}
// Computes raw CSS rules
func (inliner *Inliner) computeRawCSS() string {
result := ""
for _, rawRule := range inliner.rawRules {
result += rawRule.String()
result += "\n"
}
return result
}
// Insert raw CSS rules into HTML document
func (inliner *Inliner) insertRawStylesheet() {
rawCSS := inliner.computeRawCSS()
if rawCSS != "" {
// create <style> element
cssNode := &html.Node{
Type: html.TextNode,
Data: "\n" + rawCSS,
}
styleNode := &html.Node{
Type: html.ElementNode,
Data: "style",
Attr: []html.Attribute{{Key: "type", Val: "text/css"}},
}
styleNode.AppendChild(cssNode)
// append to <head> element
headNode := inliner.doc.Find("head")
if headNode == nil {
// @todo Create head node !
panic("NOT IMPLEMENTED: create missing <head> node")
}
headNode.AppendNodes(styleNode)
}
}
// Generates HTML
func (inliner *Inliner) genHTML() (string, error) {
return inliner.doc.Html()
}
// Inlinable returns true if given selector is inlinable
func Inlinable(selector string) bool {
if strings.Contains(selector, "::") {
return false
}
for _, badSel := range unsupportedSelectors {
if strings.Contains(selector, badSel) {
return false
}
}
return true
}

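The exported entry point is `inliner.Inline`, which moves matching qualified rules into `style` attributes and keeps non-inlinable rules in a `<style>` block; presumably this is why the package is vendored here, since HTML email generally requires inline styles. A hedged sketch with made-up markup:

```go
package main

import (
	"fmt"

	"github.com/aymerick/douceur/inliner"
)

func main() {
	page := `<html><head><style>
		p { color: #555; }
		.alert { font-weight: bold; }
	</style></head>
	<body><p class="alert">Build failed</p></body></html>`

	out, err := inliner.Inline(page)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// The <style> element is removed and the matched declarations end up
	// on the paragraph as style="color: #555; font-weight: bold;".
}
```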
View file

@ -0,0 +1,26 @@
package inliner
import "github.com/aymerick/douceur/css"
// StyleDeclaration represents a style declaration
type StyleDeclaration struct {
StyleRule *StyleRule
Declaration *css.Declaration
}
// NewStyleDeclaration instantiates a new StyleDeclaration
func NewStyleDeclaration(styleRule *StyleRule, declaration *css.Declaration) *StyleDeclaration {
return &StyleDeclaration{
StyleRule: styleRule,
Declaration: declaration,
}
}
// Specificity computes style declaration specificity
func (styleDecl *StyleDeclaration) Specificity() int {
if styleDecl.Declaration.Important {
return 10000
}
return styleDecl.StyleRule.Specificity
}

View file

@ -0,0 +1,87 @@
package inliner
import (
"fmt"
"regexp"
"strings"
"github.com/aymerick/douceur/css"
)
const (
inlineFakeSelector = "*INLINE*"
// Regular expressions borrowed from premailer:
// https://github.com/premailer/css_parser/blob/master/lib/css_parser/regexps.rb
nonIDAttributesAndPseudoClassesRegexpConst = `(?i)(\.[\w]+)|\[(\w+)|(\:(link|visited|active|hover|focus|lang|target|enabled|disabled|checked|indeterminate|root|nth-child|nth-last-child|nth-of-type|nth-last-of-type|first-child|last-child|first-of-type|last-of-type|only-child|only-of-type|empty|contains))`
elementsAndPseudoElementsRegexpConst = `(?i)((^|[\s\+\>\~]+)[\w]+|\:{1,2}(after|before|first-letter|first-line|selection))`
)
var (
nonIDAttrAndPseudoClassesRegexp *regexp.Regexp
elementsAndPseudoElementsRegexp *regexp.Regexp
)
// StyleRule represents a Qualified Rule for a single selector
type StyleRule struct {
// The style rule selector
Selector string
// The style rule properties
Declarations []*css.Declaration
// Selector specificity
Specificity int
}
func init() {
nonIDAttrAndPseudoClassesRegexp, _ = regexp.Compile(nonIDAttributesAndPseudoClassesRegexpConst)
elementsAndPseudoElementsRegexp, _ = regexp.Compile(elementsAndPseudoElementsRegexpConst)
}
// NewStyleRule instantiates a new StyleRule
func NewStyleRule(selector string, declarations []*css.Declaration) *StyleRule {
return &StyleRule{
Selector: selector,
Declarations: declarations,
Specificity: ComputeSpecificity(selector),
}
}
// Returns the string representation of a style rule
func (styleRule *StyleRule) String() string {
result := ""
result += styleRule.Selector
if len(styleRule.Declarations) == 0 {
result += ";"
} else {
result += " {\n"
for _, decl := range styleRule.Declarations {
result += fmt.Sprintf(" %s\n", decl.String())
}
result += "}"
}
return result
}
// ComputeSpecificity computes style rule specificity
//
// cf. http://www.w3.org/TR/selectors/#specificity
func ComputeSpecificity(selector string) int {
result := 0
if selector == inlineFakeSelector {
result += 1000
}
result += 100 * strings.Count(selector, "#")
result += 10 * len(nonIDAttrAndPseudoClassesRegexp.FindAllStringSubmatch(selector, -1))
result += len(elementsAndPseudoElementsRegexp.FindAllStringSubmatch(selector, -1))
return result
}

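Working through the two regular expressions above, `ComputeSpecificity` approximates the CSS cascade: 100 per id, 10 per class/attribute/pseudo-class, 1 per element or pseudo-element, plus 1000 for inline styles. A quick sketch with the expected scores in comments; the selectors are chosen only for illustration:

```go
package main

import (
	"fmt"

	"github.com/aymerick/douceur/inliner"
)

func main() {
	for _, sel := range []string{
		"p",             // one element                   -> 1
		"p.alert",       // element + class               -> 11
		"#header p",     // id + element                  -> 101
		"ul li a:hover", // three elements + pseudo-class -> 13
	} {
		fmt.Printf("%-14s %d\n", sel, inliner.ComputeSpecificity(sel))
	}
}
```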
409
vendor/github.com/aymerick/douceur/parser/parser.go generated vendored Normal file
View file

@ -0,0 +1,409 @@
package parser
import (
"errors"
"fmt"
"regexp"
"strings"
"github.com/gorilla/css/scanner"
"github.com/aymerick/douceur/css"
)
const (
importantSuffixRegexp = `(?i)\s*!important\s*$`
)
var (
importantRegexp *regexp.Regexp
)
// Parser represents a CSS parser
type Parser struct {
scan *scanner.Scanner // Tokenizer
// Tokens parsed but not consumed yet
tokens []*scanner.Token
// Rule embedding level
embedLevel int
}
func init() {
importantRegexp = regexp.MustCompile(importantSuffixRegexp)
}
// NewParser instantiates a new parser
func NewParser(txt string) *Parser {
return &Parser{
scan: scanner.New(txt),
}
}
// Parse parses a whole stylesheet
func Parse(text string) (*css.Stylesheet, error) {
result, err := NewParser(text).ParseStylesheet()
if err != nil {
return nil, err
}
return result, nil
}
// ParseDeclarations parses CSS declarations
func ParseDeclarations(text string) ([]*css.Declaration, error) {
result, err := NewParser(text).ParseDeclarations()
if err != nil {
return nil, err
}
return result, nil
}
// ParseStylesheet parses a stylesheet
func (parser *Parser) ParseStylesheet() (*css.Stylesheet, error) {
result := css.NewStylesheet()
// Parse BOM
if _, err := parser.parseBOM(); err != nil {
return result, err
}
// Parse list of rules
rules, err := parser.ParseRules()
if err != nil {
return result, err
}
result.Rules = rules
return result, nil
}
// ParseRules parses a list of rules
func (parser *Parser) ParseRules() ([]*css.Rule, error) {
result := []*css.Rule{}
inBlock := false
if parser.tokenChar("{") {
// parsing a block of rules
inBlock = true
parser.embedLevel++
parser.shiftToken()
}
for parser.tokenParsable() {
if parser.tokenIgnorable() {
parser.shiftToken()
} else if parser.tokenChar("}") {
if !inBlock {
errMsg := fmt.Sprintf("Unexpected } character: %s", parser.nextToken().String())
return result, errors.New(errMsg)
}
parser.shiftToken()
parser.embedLevel--
// finished
break
} else {
rule, err := parser.ParseRule()
if err != nil {
return result, err
}
rule.EmbedLevel = parser.embedLevel
result = append(result, rule)
}
}
return result, parser.err()
}
// ParseRule parses a rule
func (parser *Parser) ParseRule() (*css.Rule, error) {
if parser.tokenAtKeyword() {
return parser.parseAtRule()
}
return parser.parseQualifiedRule()
}
// ParseDeclarations parses a list of declarations
func (parser *Parser) ParseDeclarations() ([]*css.Declaration, error) {
result := []*css.Declaration{}
if parser.tokenChar("{") {
parser.shiftToken()
}
for parser.tokenParsable() {
if parser.tokenIgnorable() {
parser.shiftToken()
} else if parser.tokenChar("}") {
// end of block
parser.shiftToken()
break
} else {
declaration, err := parser.ParseDeclaration()
if err != nil {
return result, err
}
result = append(result, declaration)
}
}
return result, parser.err()
}
// ParseDeclaration parses a declaration
func (parser *Parser) ParseDeclaration() (*css.Declaration, error) {
result := css.NewDeclaration()
curValue := ""
for parser.tokenParsable() {
if parser.tokenChar(":") {
result.Property = strings.TrimSpace(curValue)
curValue = ""
parser.shiftToken()
} else if parser.tokenChar(";") || parser.tokenChar("}") {
if result.Property == "" {
errMsg := fmt.Sprintf("Unexpected ; character: %s", parser.nextToken().String())
return result, errors.New(errMsg)
}
if importantRegexp.MatchString(curValue) {
result.Important = true
curValue = importantRegexp.ReplaceAllString(curValue, "")
}
result.Value = strings.TrimSpace(curValue)
if parser.tokenChar(";") {
parser.shiftToken()
}
// finished
break
} else {
token := parser.shiftToken()
curValue += token.Value
}
}
// log.Printf("[parsed] Declaration: %s", result.String())
return result, parser.err()
}
// Parse an At Rule
func (parser *Parser) parseAtRule() (*css.Rule, error) {
// parse rule name (eg: "@import")
token := parser.shiftToken()
result := css.NewRule(css.AtRule)
result.Name = token.Value
for parser.tokenParsable() {
if parser.tokenChar(";") {
parser.shiftToken()
// finished
break
} else if parser.tokenChar("{") {
if result.EmbedsRules() {
// parse rules block
rules, err := parser.ParseRules()
if err != nil {
return result, err
}
result.Rules = rules
} else {
// parse declarations block
declarations, err := parser.ParseDeclarations()
if err != nil {
return result, err
}
result.Declarations = declarations
}
// finished
break
} else {
// parse prelude
prelude, err := parser.parsePrelude()
if err != nil {
return result, err
}
result.Prelude = prelude
}
}
// log.Printf("[parsed] Rule: %s", result.String())
return result, parser.err()
}
// Parse a Qualified Rule
func (parser *Parser) parseQualifiedRule() (*css.Rule, error) {
result := css.NewRule(css.QualifiedRule)
for parser.tokenParsable() {
if parser.tokenChar("{") {
if result.Prelude == "" {
errMsg := fmt.Sprintf("Unexpected { character: %s", parser.nextToken().String())
return result, errors.New(errMsg)
}
// parse declarations block
declarations, err := parser.ParseDeclarations()
if err != nil {
return result, err
}
result.Declarations = declarations
// finished
break
} else {
// parse prelude
prelude, err := parser.parsePrelude()
if err != nil {
return result, err
}
result.Prelude = prelude
}
}
result.Selectors = strings.Split(result.Prelude, ",")
for i, sel := range result.Selectors {
result.Selectors[i] = strings.TrimSpace(sel)
}
// log.Printf("[parsed] Rule: %s", result.String())
return result, parser.err()
}
// Parse Rule prelude
func (parser *Parser) parsePrelude() (string, error) {
result := ""
for parser.tokenParsable() && !parser.tokenEndOfPrelude() {
token := parser.shiftToken()
result += token.Value
}
result = strings.TrimSpace(result)
// log.Printf("[parsed] prelude: %s", result)
return result, parser.err()
}
// Parse BOM
func (parser *Parser) parseBOM() (bool, error) {
if parser.nextToken().Type == scanner.TokenBOM {
parser.shiftToken()
return true, nil
}
return false, parser.err()
}
// Returns next token without removing it from tokens buffer
func (parser *Parser) nextToken() *scanner.Token {
if len(parser.tokens) == 0 {
// fetch next token
nextToken := parser.scan.Next()
// log.Printf("[token] %s => %v", nextToken.Type.String(), nextToken.Value)
// queue it
parser.tokens = append(parser.tokens, nextToken)
}
return parser.tokens[0]
}
// Returns next token and remove it from the tokens buffer
func (parser *Parser) shiftToken() *scanner.Token {
var result *scanner.Token
result, parser.tokens = parser.tokens[0], parser.tokens[1:]
return result
}
// Returns tokenizer error, or nil if no error
func (parser *Parser) err() error {
if parser.tokenError() {
token := parser.nextToken()
return fmt.Errorf("Tokenizer error: %s", token.String())
}
return nil
}
// Returns true if next token is Error
func (parser *Parser) tokenError() bool {
return parser.nextToken().Type == scanner.TokenError
}
// Returns true if next token is EOF
func (parser *Parser) tokenEOF() bool {
return parser.nextToken().Type == scanner.TokenEOF
}
// Returns true if next token is a whitespace
func (parser *Parser) tokenWS() bool {
return parser.nextToken().Type == scanner.TokenS
}
// Returns true if next token is a comment
func (parser *Parser) tokenComment() bool {
return parser.nextToken().Type == scanner.TokenComment
}
// Returns true if next token is a CDO or a CDC
func (parser *Parser) tokenCDOorCDC() bool {
switch parser.nextToken().Type {
case scanner.TokenCDO, scanner.TokenCDC:
return true
default:
return false
}
}
// Returns true if next token is ignorable
func (parser *Parser) tokenIgnorable() bool {
return parser.tokenWS() || parser.tokenComment() || parser.tokenCDOorCDC()
}
// Returns true if next token is parsable
func (parser *Parser) tokenParsable() bool {
return !parser.tokenEOF() && !parser.tokenError()
}
// Returns true if next token is an At Rule keyword
func (parser *Parser) tokenAtKeyword() bool {
return parser.nextToken().Type == scanner.TokenAtKeyword
}
// Returns true if next token is given character
func (parser *Parser) tokenChar(value string) bool {
token := parser.nextToken()
return (token.Type == scanner.TokenChar) && (token.Value == value)
}
// Returns true if next token marks the end of a prelude
func (parser *Parser) tokenEndOfPrelude() bool {
return parser.tokenChar(";") || parser.tokenChar("{")
}
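For context, a small usage sketch (not part of this commit) showing the Parse entry point defined above; it relies only on the css.Stylesheet, css.Rule and css.Declaration fields that this parser populates:

```go
package main

import (
	"fmt"

	"github.com/aymerick/douceur/parser"
)

func main() {
	sheet, err := parser.Parse(`p { color: red; font-weight: bold !important; }`)
	if err != nil {
		panic(err)
	}
	for _, rule := range sheet.Rules {
		fmt.Println(rule.Prelude, rule.Selectors) // "p [p]"
		for _, decl := range rule.Declarations {
			// Important is true only for the font-weight declaration
			fmt.Printf("%s: %s (important=%t)\n", decl.Property, decl.Value, decl.Important)
		}
	}
}
```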

vendor/github.com/aymerick/raymond/BENCHMARKS.md
@ -0,0 +1,46 @@
# Benchmarks
Hardware: MacBookPro11,1 - Intel Core i5 - 2.6 GHz - 8 GB RAM
With:
- handlebars.js #8cba84df119c317fcebc49fb285518542ca9c2d0
- raymond #7bbaaf50ed03c96b56687d7fa6c6e04e02375a98
## handlebars.js (ops/ms)
arguments 198 ±4 (5)
array-each 568 ±23 (5)
array-mustache 522 ±18 (4)
complex 71 ±7 (3)
data 67 ±2 (3)
depth-1 47 ±2 (3)
depth-2 14 ±1 (2)
object-mustache 1099 ±47 (5)
object 907 ±58 (4)
partial-recursion 46 ±3 (4)
partial 68 ±3 (3)
paths 1650 ±50 (3)
string 2552 ±157 (3)
subexpression 141 ±2 (4)
variables 2671 ±83 (4)
## raymond (ns/op and ops/ms)
BenchmarkArguments 200000 6642 ns/op 151 ops/ms
BenchmarkArrayEach 100000 19584 ns/op 51 ops/ms
BenchmarkArrayMustache 100000 17305 ns/op 58 ops/ms
BenchmarkComplex 30000 50270 ns/op 20 ops/ms
BenchmarkData 50000 25551 ns/op 39 ops/ms
BenchmarkDepth1 100000 20162 ns/op 50 ops/ms
BenchmarkDepth2 30000 47782 ns/op 21 ops/ms
BenchmarkObjectMustache 200000 7668 ns/op 130 ops/ms
BenchmarkObject 200000 8843 ns/op 113 ops/ms
BenchmarkPartialRecursion 50000 23139 ns/op 43 ops/ms
BenchmarkPartial 50000 31015 ns/op 32 ops/ms
BenchmarkPath 200000 8997 ns/op 111 ops/ms
BenchmarkString 1000000 1879 ns/op 532 ops/ms
BenchmarkSubExpression 300000 4935 ns/op 203 ops/ms
BenchmarkVariables 200000 6478 ns/op 154 ops/ms

vendor/github.com/aymerick/raymond/CHANGELOG.md
@ -0,0 +1,33 @@
# Raymond Changelog
### Raymond 2.0.1 _(June 01, 2016)_
- [BUGFIX] Removes data races [#3](https://github.com/aymerick/raymond/issues/3) - Thanks [@markbates](https://github.com/markbates)
### Raymond 2.0.0 _(May 01, 2016)_
- [BUGFIX] Fixes passing of context in helper options [#2](https://github.com/aymerick/raymond/issues/2) - Thanks [@GhostRussia](https://github.com/GhostRussia)
- [BREAKING] Renames and unexports constants:
- `handlebars.DUMP_TPL`
- `lexer.ESCAPED_ESCAPED_OPEN_MUSTACHE`
- `lexer.ESCAPED_OPEN_MUSTACHE`
- `lexer.OPEN_MUSTACHE`
- `lexer.CLOSE_MUSTACHE`
- `lexer.CLOSE_STRIP_MUSTACHE`
- `lexer.CLOSE_UNESCAPED_STRIP_MUSTACHE`
- `lexer.DUMP_TOKEN_POS`
- `lexer.DUMP_ALL_TOKENS_VAL`
### Raymond 1.1.0 _(June 15, 2015)_
- Permits templates references with lowercase versions of struct fields.
- Adds `ParseFile()` function.
- Adds `RegisterPartialFile()`, `RegisterPartialFiles()` and `Clone()` methods on `Template`.
- Helpers can now be struct methods.
- Ensures safe concurrent access to helpers and partials.
### Raymond 1.0.0 _(June 09, 2015)_
- This is the first release. Raymond supports almost all handlebars features. See https://github.com/aymerick/raymond#limitations for a list of differences with the javascript implementation.

vendor/github.com/aymerick/raymond/LICENSE
@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Aymerick JEHANNE
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/github.com/aymerick/raymond/README.md (1417 lines)
File diff suppressed because it is too large

vendor/github.com/aymerick/raymond/VERSION
@ -0,0 +1 @@
2.0.1

vendor/github.com/aymerick/raymond/ast/node.go
@ -0,0 +1,785 @@
// Package ast provides structures to represent a handlebars Abstract Syntax Tree, and a Visitor interface to visit that tree.
package ast
import (
"fmt"
"strconv"
)
// References:
// - https://github.com/wycats/handlebars.js/blob/master/lib/handlebars/compiler/ast.js
// - https://github.com/wycats/handlebars.js/blob/master/docs/compiler-api.md
// - https://github.com/golang/go/blob/master/src/text/template/parse/node.go
// Node is an element in the AST.
type Node interface {
// node type
Type() NodeType
// location of node in original input string
Location() Loc
// string representation, used for debugging
String() string
// accepts visitor
Accept(Visitor) interface{}
}
// Visitor is the interface to visit an AST.
type Visitor interface {
VisitProgram(*Program) interface{}
// statements
VisitMustache(*MustacheStatement) interface{}
VisitBlock(*BlockStatement) interface{}
VisitPartial(*PartialStatement) interface{}
VisitContent(*ContentStatement) interface{}
VisitComment(*CommentStatement) interface{}
// expressions
VisitExpression(*Expression) interface{}
VisitSubExpression(*SubExpression) interface{}
VisitPath(*PathExpression) interface{}
// literals
VisitString(*StringLiteral) interface{}
VisitBoolean(*BooleanLiteral) interface{}
VisitNumber(*NumberLiteral) interface{}
// miscellaneous
VisitHash(*Hash) interface{}
VisitHashPair(*HashPair) interface{}
}
// NodeType represents an AST Node type.
type NodeType int
// Type returns itself, and permits structs that embed it to satisfy that part of the Node interface.
func (t NodeType) Type() NodeType {
return t
}
const (
// NodeProgram is the program node
NodeProgram NodeType = iota
// NodeMustache is the mustache statement node
NodeMustache
// NodeBlock is the block statement node
NodeBlock
// NodePartial is the partial statement node
NodePartial
// NodeContent is the content statement node
NodeContent
// NodeComment is the comment statement node
NodeComment
// NodeExpression is the expression node
NodeExpression
// NodeSubExpression is the subexpression node
NodeSubExpression
// NodePath is the expression path node
NodePath
// NodeBoolean is the literal boolean node
NodeBoolean
// NodeNumber is the literal number node
NodeNumber
// NodeString is the literal string node
NodeString
// NodeHash is the hash node
NodeHash
// NodeHashPair is the hash pair node
NodeHashPair
)
// Loc represents the position of a parsed node in source file.
type Loc struct {
Pos int // Byte position
Line int // Line number
}
// Location returns itself, and permits structs that embed it to satisfy that part of the Node interface.
func (l Loc) Location() Loc {
return l
}
// Strip describes node whitespace management.
type Strip struct {
Open bool
Close bool
OpenStandalone bool
CloseStandalone bool
InlineStandalone bool
}
// NewStrip instantiates a Strip for given open and close mustaches.
func NewStrip(openStr, closeStr string) *Strip {
return &Strip{
Open: (len(openStr) > 2) && openStr[2] == '~',
Close: (len(closeStr) > 2) && closeStr[len(closeStr)-3] == '~',
}
}
// NewStripForStr instantiates a Strip for given tag.
func NewStripForStr(str string) *Strip {
return &Strip{
Open: (len(str) > 2) && str[2] == '~',
Close: (len(str) > 2) && str[len(str)-3] == '~',
}
}
// String returns a string representation of receiver that can be used for debugging.
func (s *Strip) String() string {
return fmt.Sprintf("Open: %t, Close: %t, OpenStandalone: %t, CloseStandalone: %t, InlineStandalone: %t", s.Open, s.Close, s.OpenStandalone, s.CloseStandalone, s.InlineStandalone)
}
//
// Program
//
// Program represents a program node.
type Program struct {
NodeType
Loc
Body []Node // [ Statement ... ]
BlockParams []string
Chained bool
// whitespace management
Strip *Strip
}
// NewProgram instantiates a new program node.
func NewProgram(pos int, line int) *Program {
return &Program{
NodeType: NodeProgram,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *Program) String() string {
return fmt.Sprintf("Program{Pos: %d}", node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *Program) Accept(visitor Visitor) interface{} {
return visitor.VisitProgram(node)
}
// AddStatement adds given statement to program.
func (node *Program) AddStatement(statement Node) {
node.Body = append(node.Body, statement)
}
//
// Mustache Statement
//
// MustacheStatement represents a mustache node.
type MustacheStatement struct {
NodeType
Loc
Unescaped bool
Expression *Expression
// whitespace management
Strip *Strip
}
// NewMustacheStatement instantiates a new mustache node.
func NewMustacheStatement(pos int, line int, unescaped bool) *MustacheStatement {
return &MustacheStatement{
NodeType: NodeMustache,
Loc: Loc{pos, line},
Unescaped: unescaped,
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *MustacheStatement) String() string {
return fmt.Sprintf("Mustache{Pos: %d}", node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *MustacheStatement) Accept(visitor Visitor) interface{} {
return visitor.VisitMustache(node)
}
//
// Block Statement
//
// BlockStatement represents a block node.
type BlockStatement struct {
NodeType
Loc
Expression *Expression
Program *Program
Inverse *Program
// whitespace management
OpenStrip *Strip
InverseStrip *Strip
CloseStrip *Strip
}
// NewBlockStatement instantiates a new block node.
func NewBlockStatement(pos int, line int) *BlockStatement {
return &BlockStatement{
NodeType: NodeBlock,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *BlockStatement) String() string {
return fmt.Sprintf("Block{Pos: %d}", node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *BlockStatement) Accept(visitor Visitor) interface{} {
return visitor.VisitBlock(node)
}
//
// Partial Statement
//
// PartialStatement represents a partial node.
type PartialStatement struct {
NodeType
Loc
Name Node // PathExpression | SubExpression
Params []Node // [ Expression ... ]
Hash *Hash
// whitespace management
Strip *Strip
Indent string
}
// NewPartialStatement instantiates a new partial node.
func NewPartialStatement(pos int, line int) *PartialStatement {
return &PartialStatement{
NodeType: NodePartial,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *PartialStatement) String() string {
return fmt.Sprintf("Partial{Name:%s, Pos:%d}", node.Name, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *PartialStatement) Accept(visitor Visitor) interface{} {
return visitor.VisitPartial(node)
}
//
// Content Statement
//
// ContentStatement represents a content node.
type ContentStatement struct {
NodeType
Loc
Value string
Original string
// whitespace management
RightStripped bool
LeftStripped bool
}
// NewContentStatement instantiates a new content node.
func NewContentStatement(pos int, line int, val string) *ContentStatement {
return &ContentStatement{
NodeType: NodeContent,
Loc: Loc{pos, line},
Value: val,
Original: val,
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *ContentStatement) String() string {
return fmt.Sprintf("Content{Value:'%s', Pos:%d}", node.Value, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *ContentStatement) Accept(visitor Visitor) interface{} {
return visitor.VisitContent(node)
}
//
// Comment Statement
//
// CommentStatement represents a comment node.
type CommentStatement struct {
NodeType
Loc
Value string
// whitespace management
Strip *Strip
}
// NewCommentStatement instantiates a new comment node.
func NewCommentStatement(pos int, line int, val string) *CommentStatement {
return &CommentStatement{
NodeType: NodeComment,
Loc: Loc{pos, line},
Value: val,
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *CommentStatement) String() string {
return fmt.Sprintf("Comment{Value:'%s', Pos:%d}", node.Value, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *CommentStatement) Accept(visitor Visitor) interface{} {
return visitor.VisitComment(node)
}
//
// Expression
//
// Expression represents an expression node.
type Expression struct {
NodeType
Loc
Path Node // PathExpression | StringLiteral | BooleanLiteral | NumberLiteral
Params []Node // [ Expression ... ]
Hash *Hash
}
// NewExpression instantiates a new expression node.
func NewExpression(pos int, line int) *Expression {
return &Expression{
NodeType: NodeExpression,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *Expression) String() string {
return fmt.Sprintf("Expr{Path:%s, Pos:%d}", node.Path, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *Expression) Accept(visitor Visitor) interface{} {
return visitor.VisitExpression(node)
}
// HelperName returns helper name, or an empty string if this expression can't be a helper.
func (node *Expression) HelperName() string {
path, ok := node.Path.(*PathExpression)
if !ok {
return ""
}
if path.Data || (len(path.Parts) != 1) || (path.Depth > 0) || path.Scoped {
return ""
}
return path.Parts[0]
}
// FieldPath returns path expression representing a field path, or nil if this is not a field path.
func (node *Expression) FieldPath() *PathExpression {
path, ok := node.Path.(*PathExpression)
if !ok {
return nil
}
return path
}
// LiteralStr returns the string representation of literal value, with a boolean set to false if this is not a literal.
func (node *Expression) LiteralStr() (string, bool) {
return LiteralStr(node.Path)
}
// Canonical returns the canonical form of expression node as a string.
func (node *Expression) Canonical() string {
if str, ok := HelperNameStr(node.Path); ok {
return str
}
return ""
}
// HelperNameStr returns the string representation of a helper name, with a boolean set to false if this is not a valid helper name.
//
// helperName : path | dataName | STRING | NUMBER | BOOLEAN | UNDEFINED | NULL
func HelperNameStr(node Node) (string, bool) {
// PathExpression
if str, ok := PathExpressionStr(node); ok {
return str, ok
}
// Literal
if str, ok := LiteralStr(node); ok {
return str, ok
}
return "", false
}
// PathExpressionStr returns the string representation of path expression value, with a boolean set to false if this is not a path expression.
func PathExpressionStr(node Node) (string, bool) {
if path, ok := node.(*PathExpression); ok {
result := path.Original
// "[foo bar]"" => "foo bar"
if (len(result) >= 2) && (result[0] == '[') && (result[len(result)-1] == ']') {
result = result[1 : len(result)-1]
}
return result, true
}
return "", false
}
// LiteralStr returns the string representation of literal value, with a boolean set to false if this is not a literal.
func LiteralStr(node Node) (string, bool) {
if lit, ok := node.(*StringLiteral); ok {
return lit.Value, true
}
if lit, ok := node.(*BooleanLiteral); ok {
return lit.Canonical(), true
}
if lit, ok := node.(*NumberLiteral); ok {
return lit.Canonical(), true
}
return "", false
}
//
// SubExpression
//
// SubExpression represents a subexpression node.
type SubExpression struct {
NodeType
Loc
Expression *Expression
}
// NewSubExpression instantiates a new subexpression node.
func NewSubExpression(pos int, line int) *SubExpression {
return &SubExpression{
NodeType: NodeSubExpression,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *SubExpression) String() string {
return fmt.Sprintf("Sexp{Path:%s, Pos:%d}", node.Expression.Path, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *SubExpression) Accept(visitor Visitor) interface{} {
return visitor.VisitSubExpression(node)
}
//
// Path Expression
//
// PathExpression represents a path expression node.
type PathExpression struct {
NodeType
Loc
Original string
Depth int
Parts []string
Data bool
Scoped bool
}
// NewPathExpression instantiates a new path expression node.
func NewPathExpression(pos int, line int, data bool) *PathExpression {
result := &PathExpression{
NodeType: NodePath,
Loc: Loc{pos, line},
Data: data,
}
if data {
result.Original = "@"
}
return result
}
// String returns a string representation of receiver that can be used for debugging.
func (node *PathExpression) String() string {
return fmt.Sprintf("Path{Original:'%s', Pos:%d}", node.Original, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *PathExpression) Accept(visitor Visitor) interface{} {
return visitor.VisitPath(node)
}
// Part adds path part.
func (node *PathExpression) Part(part string) {
node.Original += part
switch part {
case "..":
node.Depth++
node.Scoped = true
case ".", "this":
node.Scoped = true
default:
node.Parts = append(node.Parts, part)
}
}
// Sep adds path separator.
func (node *PathExpression) Sep(separator string) {
node.Original += separator
}
// IsDataRoot returns true if path expression is @root.
func (node *PathExpression) IsDataRoot() bool {
return node.Data && (node.Parts[0] == "root")
}
//
// String Literal
//
// StringLiteral represents a string node.
type StringLiteral struct {
NodeType
Loc
Value string
}
// NewStringLiteral instantiates a new string node.
func NewStringLiteral(pos int, line int, val string) *StringLiteral {
return &StringLiteral{
NodeType: NodeString,
Loc: Loc{pos, line},
Value: val,
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *StringLiteral) String() string {
return fmt.Sprintf("String{Value:'%s', Pos:%d}", node.Value, node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *StringLiteral) Accept(visitor Visitor) interface{} {
return visitor.VisitString(node)
}
//
// Boolean Literal
//
// BooleanLiteral represents a boolean node.
type BooleanLiteral struct {
NodeType
Loc
Value bool
Original string
}
// NewBooleanLiteral instantiates a new boolean node.
func NewBooleanLiteral(pos int, line int, val bool, original string) *BooleanLiteral {
return &BooleanLiteral{
NodeType: NodeBoolean,
Loc: Loc{pos, line},
Value: val,
Original: original,
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *BooleanLiteral) String() string {
return fmt.Sprintf("Boolean{Value:%s, Pos:%d}", node.Canonical(), node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *BooleanLiteral) Accept(visitor Visitor) interface{} {
return visitor.VisitBoolean(node)
}
// Canonical returns the canonical form of boolean node as a string (ie. "true" | "false").
func (node *BooleanLiteral) Canonical() string {
if node.Value {
return "true"
}
return "false"
}
//
// Number Literal
//
// NumberLiteral represents a number node.
type NumberLiteral struct {
NodeType
Loc
Value float64
IsInt bool
Original string
}
// NewNumberLiteral instantiates a new number node.
func NewNumberLiteral(pos int, line int, val float64, isInt bool, original string) *NumberLiteral {
return &NumberLiteral{
NodeType: NodeNumber,
Loc: Loc{pos, line},
Value: val,
IsInt: isInt,
Original: original,
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *NumberLiteral) String() string {
return fmt.Sprintf("Number{Value:%s, Pos:%d}", node.Canonical(), node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *NumberLiteral) Accept(visitor Visitor) interface{} {
return visitor.VisitNumber(node)
}
// Canonical returns the canonical form of number node as a string (eg: "12", "-1.51").
func (node *NumberLiteral) Canonical() string {
prec := -1
if node.IsInt {
prec = 0
}
return strconv.FormatFloat(node.Value, 'f', prec, 64)
}
// Number returns an integer or a float.
func (node *NumberLiteral) Number() interface{} {
if node.IsInt {
return int(node.Value)
}
return node.Value
}
//
// Hash
//
// Hash represents a hash node.
type Hash struct {
NodeType
Loc
Pairs []*HashPair
}
// NewHash instantiates a new hash node.
func NewHash(pos int, line int) *Hash {
return &Hash{
NodeType: NodeHash,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *Hash) String() string {
result := fmt.Sprintf("Hash{[%d", node.Loc.Pos)
for i, p := range node.Pairs {
if i > 0 {
result += ", "
}
result += p.String()
}
return result + fmt.Sprintf("], Pos:%d}", node.Loc.Pos)
}
// Accept is the receiver entry point for visitors.
func (node *Hash) Accept(visitor Visitor) interface{} {
return visitor.VisitHash(node)
}
//
// HashPair
//
// HashPair represents a hash pair node.
type HashPair struct {
NodeType
Loc
Key string
Val Node // Expression
}
// NewHashPair instantiates a new hash pair node.
func NewHashPair(pos int, line int) *HashPair {
return &HashPair{
NodeType: NodeHashPair,
Loc: Loc{pos, line},
}
}
// String returns a string representation of receiver that can be used for debugging.
func (node *HashPair) String() string {
return node.Key + "=" + node.Val.String()
}
// Accept is the receiver entry point for visitors.
func (node *HashPair) Accept(visitor Visitor) interface{} {
return visitor.VisitHashPair(node)
}
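To make the node types above a little more concrete, here is a short illustrative sketch (not part of the vendored file) that assembles a PathExpression using only the constructors defined in this file:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond/ast"
)

func main() {
	// Build the data path `@root.user.name` part by part.
	path := ast.NewPathExpression(0, 1, true) // data == true, so Original starts with "@"
	path.Part("root")
	path.Sep(".")
	path.Part("user")
	path.Sep(".")
	path.Part("name")

	fmt.Println(path.Original)     // @root.user.name
	fmt.Println(path.IsDataRoot()) // true: Data is set and the first part is "root"

	expr := ast.NewExpression(0, 1)
	expr.Path = path
	fmt.Println(expr.HelperName()) // "" because a data path with several parts cannot be a helper name
}
```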

vendor/github.com/aymerick/raymond/ast/print.go
@ -0,0 +1,279 @@
package ast
import (
"fmt"
"strings"
)
// printVisitor implements the Visitor interface to print an AST.
type printVisitor struct {
buf string
depth int
original bool
inBlock bool
}
func newPrintVisitor() *printVisitor {
return &printVisitor{}
}
// Print returns a string representation of the given AST, which can be used for debugging purposes.
func Print(node Node) string {
visitor := newPrintVisitor()
node.Accept(visitor)
return visitor.output()
}
func (v *printVisitor) output() string {
return v.buf
}
func (v *printVisitor) indent() {
for i := 0; i < v.depth; {
v.buf += " "
i++
}
}
func (v *printVisitor) str(val string) {
v.buf += val
}
func (v *printVisitor) nl() {
v.str("\n")
}
func (v *printVisitor) line(val string) {
v.indent()
v.str(val)
v.nl()
}
//
// Visitor interface
//
// Statements
// VisitProgram implements corresponding Visitor interface method
func (v *printVisitor) VisitProgram(node *Program) interface{} {
if len(node.BlockParams) > 0 {
v.line("BLOCK PARAMS: [ " + strings.Join(node.BlockParams, " ") + " ]")
}
for _, n := range node.Body {
n.Accept(v)
}
return nil
}
// VisitMustache implements corresponding Visitor interface method
func (v *printVisitor) VisitMustache(node *MustacheStatement) interface{} {
v.indent()
v.str("{{ ")
node.Expression.Accept(v)
v.str(" }}")
v.nl()
return nil
}
// VisitBlock implements corresponding Visitor interface method
func (v *printVisitor) VisitBlock(node *BlockStatement) interface{} {
v.inBlock = true
v.line("BLOCK:")
v.depth++
node.Expression.Accept(v)
if node.Program != nil {
v.line("PROGRAM:")
v.depth++
node.Program.Accept(v)
v.depth--
}
if node.Inverse != nil {
// if node.Program != nil {
// v.depth++
// }
v.line("{{^}}")
v.depth++
node.Inverse.Accept(v)
v.depth--
// if node.Program != nil {
// v.depth--
// }
}
v.inBlock = false
return nil
}
// VisitPartial implements corresponding Visitor interface method
func (v *printVisitor) VisitPartial(node *PartialStatement) interface{} {
v.indent()
v.str("{{> PARTIAL:")
v.original = true
node.Name.Accept(v)
v.original = false
if len(node.Params) > 0 {
v.str(" ")
node.Params[0].Accept(v)
}
// hash
if node.Hash != nil {
v.str(" ")
node.Hash.Accept(v)
}
v.str(" }}")
v.nl()
return nil
}
// VisitContent implements corresponding Visitor interface method
func (v *printVisitor) VisitContent(node *ContentStatement) interface{} {
v.line("CONTENT[ '" + node.Value + "' ]")
return nil
}
// VisitComment implements corresponding Visitor interface method
func (v *printVisitor) VisitComment(node *CommentStatement) interface{} {
v.line("{{! '" + node.Value + "' }}")
return nil
}
// Expressions
// VisitExpression implements corresponding Visitor interface method
func (v *printVisitor) VisitExpression(node *Expression) interface{} {
if v.inBlock {
v.indent()
}
// path
node.Path.Accept(v)
// params
v.str(" [")
for i, n := range node.Params {
if i > 0 {
v.str(", ")
}
n.Accept(v)
}
v.str("]")
// hash
if node.Hash != nil {
v.str(" ")
node.Hash.Accept(v)
}
if v.inBlock {
v.nl()
}
return nil
}
// VisitSubExpression implements corresponding Visitor interface method
func (v *printVisitor) VisitSubExpression(node *SubExpression) interface{} {
node.Expression.Accept(v)
return nil
}
// VisitPath implements corresponding Visitor interface method
func (v *printVisitor) VisitPath(node *PathExpression) interface{} {
if v.original {
v.str(node.Original)
} else {
path := strings.Join(node.Parts, "/")
result := ""
if node.Data {
result += "@"
}
v.str(result + "PATH:" + path)
}
return nil
}
// Literals
// VisitString implements corresponding Visitor interface method
func (v *printVisitor) VisitString(node *StringLiteral) interface{} {
if v.original {
v.str(node.Value)
} else {
v.str("\"" + node.Value + "\"")
}
return nil
}
// VisitBoolean implements corresponding Visitor interface method
func (v *printVisitor) VisitBoolean(node *BooleanLiteral) interface{} {
if v.original {
v.str(node.Original)
} else {
v.str(fmt.Sprintf("BOOLEAN{%s}", node.Canonical()))
}
return nil
}
// VisitNumber implements corresponding Visitor interface method
func (v *printVisitor) VisitNumber(node *NumberLiteral) interface{} {
if v.original {
v.str(node.Original)
} else {
v.str(fmt.Sprintf("NUMBER{%s}", node.Canonical()))
}
return nil
}
// Miscellaneous
// VisitHash implements corresponding Visitor interface method
func (v *printVisitor) VisitHash(node *Hash) interface{} {
v.str("HASH{")
for i, p := range node.Pairs {
if i > 0 {
v.str(", ")
}
p.Accept(v)
}
v.str("}")
return nil
}
// VisitHashPair implements corresponding Visitor interface method
func (v *printVisitor) VisitHashPair(node *HashPair) interface{} {
v.str(node.Key + "=")
node.Val.Accept(v)
return nil
}
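A minimal sketch (not part of this commit) of ast.Print in action: it builds the AST for `Hello {{name}}` by hand with the constructors from node.go and dumps it with the visitor above:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond/ast"
)

func main() {
	prog := ast.NewProgram(0, 1)
	prog.AddStatement(ast.NewContentStatement(0, 1, "Hello "))

	path := ast.NewPathExpression(6, 1, false)
	path.Part("name")

	expr := ast.NewExpression(6, 1)
	expr.Path = path

	mustache := ast.NewMustacheStatement(6, 1, false)
	mustache.Expression = expr
	prog.AddStatement(mustache)

	// Prints something like:
	//   CONTENT[ 'Hello ' ]
	//   {{ PATH:name [] }}
	fmt.Println(ast.Print(prog))
}
```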

vendor/github.com/aymerick/raymond/data_frame.go
@ -0,0 +1,95 @@
package raymond
import "reflect"
// DataFrame represents a private data frame.
//
// Cf. private variables documentation at: http://handlebarsjs.com/block_helpers.html
type DataFrame struct {
parent *DataFrame
data map[string]interface{}
}
// NewDataFrame instantiates a new private data frame.
func NewDataFrame() *DataFrame {
return &DataFrame{
data: make(map[string]interface{}),
}
}
// Copy instantiates a new private data frame with receiver as parent.
func (p *DataFrame) Copy() *DataFrame {
result := NewDataFrame()
for k, v := range p.data {
result.data[k] = v
}
result.parent = p
return result
}
// newIterDataFrame instantiates a new private data frame with receiver as parent and with iteration data set (@index, @key, @first, @last)
func (p *DataFrame) newIterDataFrame(length int, i int, key interface{}) *DataFrame {
result := p.Copy()
result.Set("index", i)
result.Set("key", key)
result.Set("first", i == 0)
result.Set("last", i == length-1)
return result
}
// Set sets a data value.
func (p *DataFrame) Set(key string, val interface{}) {
p.data[key] = val
}
// Get gets a data value.
func (p *DataFrame) Get(key string) interface{} {
return p.find([]string{key})
}
// find gets a deep data value
//
// @todo This is NOT consistent with the way we resolve data in template (cf. `evalDataPathExpression()`) ! FIX THAT !
func (p *DataFrame) find(parts []string) interface{} {
data := p.data
for i, part := range parts {
val := data[part]
if val == nil {
return nil
}
if i == len(parts)-1 {
// found
return val
}
valValue := reflect.ValueOf(val)
if valValue.Kind() != reflect.Map {
// not found
return nil
}
// continue
data = mapStringInterface(valValue)
}
// not found
return nil
}
// mapStringInterface converts any `map` to `map[string]interface{}`
func mapStringInterface(value reflect.Value) map[string]interface{} {
result := make(map[string]interface{})
for _, key := range value.MapKeys() {
result[strValue(key)] = value.MapIndex(key).Interface()
}
return result
}
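For illustration only (not part of the vendored file), a sketch of how block helpers use these frames: a child frame copies the parent's values and layers iteration data on top:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	root := raymond.NewDataFrame()
	root.Set("locale", "en")

	// Copy duplicates the parent's values and records the parent pointer,
	// which is what newIterDataFrame builds on for @index/@key/@first/@last.
	child := root.Copy()
	child.Set("index", 0)

	fmt.Println(child.Get("index"))  // 0
	fmt.Println(child.Get("locale")) // en (copied from the parent frame)
	fmt.Println(root.Get("index"))   // <nil> (the child's Set does not touch the parent)
}
```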

vendor/github.com/aymerick/raymond/escape.go
@ -0,0 +1,65 @@
package raymond
import (
"bytes"
"strings"
)
//
// That whole file is borrowed from https://github.com/golang/go/tree/master/src/html/escape.go
//
// With changes:
// &#39 => &apos;
// &#34 => &quot;
//
// To stay in sync with JS implementation, and make mustache tests pass.
//
type writer interface {
WriteString(string) (int, error)
}
const escapedChars = `&'<>"`
func escape(w writer, s string) error {
i := strings.IndexAny(s, escapedChars)
for i != -1 {
if _, err := w.WriteString(s[:i]); err != nil {
return err
}
var esc string
switch s[i] {
case '&':
esc = "&amp;"
case '\'':
esc = "&apos;"
case '<':
esc = "&lt;"
case '>':
esc = "&gt;"
case '"':
esc = "&quot;"
default:
panic("unrecognized escape character")
}
s = s[i+1:]
if _, err := w.WriteString(esc); err != nil {
return err
}
i = strings.IndexAny(s, escapedChars)
}
_, err := w.WriteString(s)
return err
}
// Escape escapes special HTML characters.
//
// It can be used by helpers that return a SafeString and that need to escape some content by themselves.
func Escape(s string) string {
if strings.IndexAny(s, escapedChars) == -1 {
return s
}
var buf bytes.Buffer
escape(&buf, s)
return buf.String()
}
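A quick usage sketch (not part of this commit) for the exported Escape function above, showing the five substitutions it performs:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	// & ' < > " are replaced; everything else passes through untouched.
	fmt.Println(raymond.Escape(`<a href="x">Tom & Jerry's</a>`))
	// Output: &lt;a href=&quot;x&quot;&gt;Tom &amp; Jerry&apos;s&lt;/a&gt;
}
```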

vendor/github.com/aymerick/raymond/eval.go (1005 lines)
File diff suppressed because it is too large

vendor/github.com/aymerick/raymond/helper.go
@ -0,0 +1,382 @@
package raymond
import (
"fmt"
"log"
"reflect"
"sync"
)
// Options represents the options argument provided to helpers and context functions.
type Options struct {
// evaluation visitor
eval *evalVisitor
// params
params []interface{}
hash map[string]interface{}
}
// helpers stores all globally registered helpers
var helpers = make(map[string]reflect.Value)
// protects global helpers
var helpersMutex sync.RWMutex
func init() {
// register builtin helpers
RegisterHelper("if", ifHelper)
RegisterHelper("unless", unlessHelper)
RegisterHelper("with", withHelper)
RegisterHelper("each", eachHelper)
RegisterHelper("log", logHelper)
RegisterHelper("lookup", lookupHelper)
RegisterHelper("equal", equalHelper)
}
// RegisterHelper registers a global helper. That helper will be available to all templates.
func RegisterHelper(name string, helper interface{}) {
helpersMutex.Lock()
defer helpersMutex.Unlock()
if helpers[name] != zero {
panic(fmt.Errorf("Helper already registered: %s", name))
}
val := reflect.ValueOf(helper)
ensureValidHelper(name, val)
helpers[name] = val
}
// RegisterHelpers registers several global helpers. Those helpers will be available to all templates.
func RegisterHelpers(helpers map[string]interface{}) {
for name, helper := range helpers {
RegisterHelper(name, helper)
}
}
// ensureValidHelper panics if given helper is not valid
func ensureValidHelper(name string, funcValue reflect.Value) {
if funcValue.Kind() != reflect.Func {
panic(fmt.Errorf("Helper must be a function: %s", name))
}
funcType := funcValue.Type()
if funcType.NumOut() != 1 {
panic(fmt.Errorf("Helper function must return a string or a SafeString: %s", name))
}
// @todo Check if first returned value is a string, SafeString or interface{} ?
}
// findHelper finds a globally registered helper
func findHelper(name string) reflect.Value {
helpersMutex.RLock()
defer helpersMutex.RUnlock()
return helpers[name]
}
// newOptions instantiates a new Options
func newOptions(eval *evalVisitor, params []interface{}, hash map[string]interface{}) *Options {
return &Options{
eval: eval,
params: params,
hash: hash,
}
}
// newEmptyOptions instantiates a new empty Options
func newEmptyOptions(eval *evalVisitor) *Options {
return &Options{
eval: eval,
hash: make(map[string]interface{}),
}
}
//
// Context Values
//
// Value returns field value from current context.
func (options *Options) Value(name string) interface{} {
value := options.eval.evalField(options.eval.curCtx(), name, false)
if !value.IsValid() {
return nil
}
return value.Interface()
}
// ValueStr returns string representation of field value from current context.
func (options *Options) ValueStr(name string) string {
return Str(options.Value(name))
}
// Ctx returns current evaluation context.
func (options *Options) Ctx() interface{} {
return options.eval.curCtx().Interface()
}
//
// Hash Arguments
//
// HashProp returns hash property.
func (options *Options) HashProp(name string) interface{} {
return options.hash[name]
}
// HashStr returns string representation of hash property.
func (options *Options) HashStr(name string) string {
return Str(options.hash[name])
}
// Hash returns entire hash.
func (options *Options) Hash() map[string]interface{} {
return options.hash
}
//
// Parameters
//
// Param returns parameter at given position.
func (options *Options) Param(pos int) interface{} {
if len(options.params) > pos {
return options.params[pos]
}
return nil
}
// ParamStr returns string representation of parameter at given position.
func (options *Options) ParamStr(pos int) string {
return Str(options.Param(pos))
}
// Params returns all parameters.
func (options *Options) Params() []interface{} {
return options.params
}
//
// Private data
//
// Data returns private data value.
func (options *Options) Data(name string) interface{} {
return options.eval.dataFrame.Get(name)
}
// DataStr returns string representation of private data value.
func (options *Options) DataStr(name string) string {
return Str(options.eval.dataFrame.Get(name))
}
// DataFrame returns current private data frame.
func (options *Options) DataFrame() *DataFrame {
return options.eval.dataFrame
}
// NewDataFrame instantiates a new data frame that is a copy of current evaluation data frame.
//
// Parent of returned data frame is set to current evaluation data frame.
func (options *Options) NewDataFrame() *DataFrame {
return options.eval.dataFrame.Copy()
}
// newIterDataFrame instantiates a new data frame and sets iteration-specific vars
func (options *Options) newIterDataFrame(length int, i int, key interface{}) *DataFrame {
return options.eval.dataFrame.newIterDataFrame(length, i, key)
}
//
// Evaluation
//
// evalBlock evaluates block with given context, private data and iteration key
func (options *Options) evalBlock(ctx interface{}, data *DataFrame, key interface{}) string {
result := ""
if block := options.eval.curBlock(); (block != nil) && (block.Program != nil) {
result = options.eval.evalProgram(block.Program, ctx, data, key)
}
return result
}
// Fn evaluates block with current evaluation context.
func (options *Options) Fn() string {
return options.evalBlock(nil, nil, nil)
}
// FnCtxData evaluates block with given context and private data frame.
func (options *Options) FnCtxData(ctx interface{}, data *DataFrame) string {
return options.evalBlock(ctx, data, nil)
}
// FnWith evaluates block with given context.
func (options *Options) FnWith(ctx interface{}) string {
return options.evalBlock(ctx, nil, nil)
}
// FnData evaluates block with given private data frame.
func (options *Options) FnData(data *DataFrame) string {
return options.evalBlock(nil, data, nil)
}
// Inverse evaluates "else block".
func (options *Options) Inverse() string {
result := ""
if block := options.eval.curBlock(); (block != nil) && (block.Inverse != nil) {
result, _ = block.Inverse.Accept(options.eval).(string)
}
return result
}
// Eval evaluates field for given context.
func (options *Options) Eval(ctx interface{}, field string) interface{} {
if ctx == nil {
return nil
}
if field == "" {
return nil
}
val := options.eval.evalField(reflect.ValueOf(ctx), field, false)
if !val.IsValid() {
return nil
}
return val.Interface()
}
//
// Misc
//
// isIncludableZero returns true if 'includeZero' option is set and first param is the number 0
func (options *Options) isIncludableZero() bool {
b, ok := options.HashProp("includeZero").(bool)
if ok && b {
nb, ok := options.Param(0).(int)
if ok && nb == 0 {
return true
}
}
return false
}
//
// Builtin helpers
//
// #if block helper
func ifHelper(conditional interface{}, options *Options) interface{} {
if options.isIncludableZero() || IsTrue(conditional) {
return options.Fn()
}
return options.Inverse()
}
// #unless block helper
func unlessHelper(conditional interface{}, options *Options) interface{} {
if options.isIncludableZero() || IsTrue(conditional) {
return options.Inverse()
}
return options.Fn()
}
// #with block helper
func withHelper(context interface{}, options *Options) interface{} {
if IsTrue(context) {
return options.FnWith(context)
}
return options.Inverse()
}
// #each block helper
func eachHelper(context interface{}, options *Options) interface{} {
if !IsTrue(context) {
return options.Inverse()
}
result := ""
val := reflect.ValueOf(context)
switch val.Kind() {
case reflect.Array, reflect.Slice:
for i := 0; i < val.Len(); i++ {
// computes private data
data := options.newIterDataFrame(val.Len(), i, nil)
// evaluates block
result += options.evalBlock(val.Index(i).Interface(), data, i)
}
case reflect.Map:
// note: a Go map is not ordered, so the result may vary; this behaviour differs from the JS implementation
keys := val.MapKeys()
for i := 0; i < len(keys); i++ {
key := keys[i].Interface()
ctx := val.MapIndex(keys[i]).Interface()
// computes private data
data := options.newIterDataFrame(len(keys), i, key)
// evaluates block
result += options.evalBlock(ctx, data, key)
}
case reflect.Struct:
var exportedFields []int
// collect exported fields only
for i := 0; i < val.NumField(); i++ {
if tField := val.Type().Field(i); tField.PkgPath == "" {
exportedFields = append(exportedFields, i)
}
}
for i, fieldIndex := range exportedFields {
key := val.Type().Field(fieldIndex).Name
ctx := val.Field(fieldIndex).Interface()
// computes private data
data := options.newIterDataFrame(len(exportedFields), i, key)
// evaluates block
result += options.evalBlock(ctx, data, key)
}
}
return result
}
// #log helper
func logHelper(message string) interface{} {
log.Print(message)
return ""
}
// #lookup helper
func lookupHelper(obj interface{}, field string, options *Options) interface{} {
return Str(options.Eval(obj, field))
}
// #equal helper
// Ref: https://github.com/aymerick/raymond/issues/7
func equalHelper(a interface{}, b interface{}, options *Options) interface{} {
if Str(a) == Str(b) {
return options.Fn()
}
return ""
}
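Finally, an illustrative sketch (not part of this commit) of registering a custom global helper with RegisterHelper; the raymond.Render call is assumed here as the library's usual top-level entry point, which lives outside the files shown in this diff:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aymerick/raymond"
)

func main() {
	// Like the builtin logHelper, a helper may take typed params and must return one value.
	raymond.RegisterHelper("shout", func(s string) string {
		return strings.ToUpper(s) + "!"
	})

	// Render is assumed from the rest of the library (not part of this diff).
	out, err := raymond.Render(`{{shout greeting}} {{#if loud}}really{{/if}}`, map[string]interface{}{
		"greeting": "hello",
		"loud":     true,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // HELLO! really
}
```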

vendor/github.com/aymerick/raymond/lexer/lexer.go
@ -0,0 +1,639 @@
// Package lexer provides a handlebars tokenizer.
package lexer
import (
"fmt"
"regexp"
"strings"
"unicode"
"unicode/utf8"
)
// References:
// - https://github.com/wycats/handlebars.js/blob/master/src/handlebars.l
// - https://github.com/golang/go/blob/master/src/text/template/parse/lex.go
const (
// Mustaches detection
escapedEscapedOpenMustache = "\\\\{{"
escapedOpenMustache = "\\{{"
openMustache = "{{"
closeMustache = "}}"
closeStripMustache = "~}}"
closeUnescapedStripMustache = "}~}}"
)
const eof = -1
// lexFunc represents a function that returns the next lexer function.
type lexFunc func(*Lexer) lexFunc
// Lexer is a lexical analyzer.
type Lexer struct {
input string // input to scan
name string // lexer name, used for testing purpose
tokens chan Token // channel of scanned tokens
nextFunc lexFunc // the next function to execute
pos int // current byte position in input string
line int // current line position in input string
width int // size of last rune scanned from input string
start int // start position of the token we are scanning
// the shameful contextual properties needed because `nextFunc` is not enough
closeComment *regexp.Regexp // regexp to scan close of current comment
rawBlock bool // are we parsing a raw block content ?
}
var (
lookheadChars = `[\s` + regexp.QuoteMeta("=~}/)|") + `]`
literalLookheadChars = `[\s` + regexp.QuoteMeta("~})") + `]`
// characters not allowed in an identifier
unallowedIDChars = " \n\t!\"#%&'()*+,./;<=>@[\\]^`{|}~"
// regular expressions
rID = regexp.MustCompile(`^[^` + regexp.QuoteMeta(unallowedIDChars) + `]+`)
rDotID = regexp.MustCompile(`^\.` + lookheadChars)
rTrue = regexp.MustCompile(`^true` + literalLookheadChars)
rFalse = regexp.MustCompile(`^false` + literalLookheadChars)
rOpenRaw = regexp.MustCompile(`^\{\{\{\{`)
rCloseRaw = regexp.MustCompile(`^\}\}\}\}`)
rOpenEndRaw = regexp.MustCompile(`^\{\{\{\{/`)
rOpenEndRawLookAhead = regexp.MustCompile(`\{\{\{\{/`)
rOpenUnescaped = regexp.MustCompile(`^\{\{~?\{`)
rCloseUnescaped = regexp.MustCompile(`^\}~?\}\}`)
rOpenBlock = regexp.MustCompile(`^\{\{~?#`)
rOpenEndBlock = regexp.MustCompile(`^\{\{~?/`)
rOpenPartial = regexp.MustCompile(`^\{\{~?>`)
// {{^}} or {{else}}
rInverse = regexp.MustCompile(`^(\{\{~?\^\s*~?\}\}|\{\{~?\s*else\s*~?\}\})`)
rOpenInverse = regexp.MustCompile(`^\{\{~?\^`)
rOpenInverseChain = regexp.MustCompile(`^\{\{~?\s*else`)
// {{ or {{&
rOpen = regexp.MustCompile(`^\{\{~?&?`)
rClose = regexp.MustCompile(`^~?\}\}`)
rOpenBlockParams = regexp.MustCompile(`^as\s+\|`)
// {{!-- ... --}}
rOpenCommentDash = regexp.MustCompile(`^\{\{~?!--\s*`)
rCloseCommentDash = regexp.MustCompile(`^\s*--~?\}\}`)
// {{! ... }}
rOpenComment = regexp.MustCompile(`^\{\{~?!\s*`)
rCloseComment = regexp.MustCompile(`^\s*~?\}\}`)
)
// Scan scans given input.
//
// Tokens can then be fetched sequentially using the NextToken() function on the returned lexer.
func Scan(input string) *Lexer {
return scanWithName(input, "")
}
// scanWithName scans given input, with a name used for testing
//
// Tokens can then be fetched sequentially using the NextToken() function on the returned lexer.
func scanWithName(input string, name string) *Lexer {
result := &Lexer{
input: input,
name: name,
tokens: make(chan Token),
line: 1,
}
go result.run()
return result
}
// Collect scans and collect all tokens.
//
// This should be used for debugging purposes only. You should use the Scan() and lexer.NextToken() functions instead.
func Collect(input string) []Token {
var result []Token
l := Scan(input)
for {
token := l.NextToken()
result = append(result, token)
if token.Kind == TokenEOF || token.Kind == TokenError {
break
}
}
return result
}
// NextToken returns the next scanned token.
func (l *Lexer) NextToken() Token {
result := <-l.tokens
return result
}
// run starts lexical analysis
func (l *Lexer) run() {
for l.nextFunc = lexContent; l.nextFunc != nil; {
l.nextFunc = l.nextFunc(l)
}
}
// next returns the next character from input, or eof if there is nothing left to scan
func (l *Lexer) next() rune {
if l.pos >= len(l.input) {
l.width = 0
return eof
}
r, w := utf8.DecodeRuneInString(l.input[l.pos:])
l.width = w
l.pos += l.width
return r
}
func (l *Lexer) produce(kind TokenKind, val string) {
l.tokens <- Token{kind, val, l.start, l.line}
// scanning a new token
l.start = l.pos
// update line number
l.line += strings.Count(val, "\n")
}
// emit emits a new scanned token
func (l *Lexer) emit(kind TokenKind) {
l.produce(kind, l.input[l.start:l.pos])
}
// emitContent emits scanned content
func (l *Lexer) emitContent() {
if l.pos > l.start {
l.emit(TokenContent)
}
}
// emitString emits a scanned string
func (l *Lexer) emitString(delimiter rune) {
str := l.input[l.start:l.pos]
// replace escaped delimiters
str = strings.Replace(str, "\\"+string(delimiter), string(delimiter), -1)
l.produce(TokenString, str)
}
// peek returns but does not consume the next character in the input
func (l *Lexer) peek() rune {
r := l.next()
l.backup()
return r
}
// backup steps back one character
//
// WARNING: Can only be called once per call of next
func (l *Lexer) backup() {
l.pos -= l.width
}
// ignore skips all characters that have been scanned up to the current position
func (l *Lexer) ignore() {
l.start = l.pos
}
// accept scans the next character if it is included in given string
func (l *Lexer) accept(valid string) bool {
if strings.IndexRune(valid, l.next()) >= 0 {
return true
}
l.backup()
return false
}
// acceptRun scans all following characters that are part of given string
func (l *Lexer) acceptRun(valid string) {
for strings.IndexRune(valid, l.next()) >= 0 {
}
l.backup()
}
// errorf emits an error token
func (l *Lexer) errorf(format string, args ...interface{}) lexFunc {
l.tokens <- Token{TokenError, fmt.Sprintf(format, args...), l.start, l.line}
return nil
}
// isString returns true if content at current scanning position starts with given string
func (l *Lexer) isString(str string) bool {
return strings.HasPrefix(l.input[l.pos:], str)
}
// findRegexp returns the first string from current scanning position that matches given regular expression
func (l *Lexer) findRegexp(r *regexp.Regexp) string {
return r.FindString(l.input[l.pos:])
}
// indexRegexp returns the index of the first string from current scanning position that matches given regular expression
//
// It returns -1 if not found
func (l *Lexer) indexRegexp(r *regexp.Regexp) int {
loc := r.FindStringIndex(l.input[l.pos:])
if loc == nil {
return -1
}
return loc[0]
}
// lexContent scans content (ie: not between mustaches)
func lexContent(l *Lexer) lexFunc {
var next lexFunc
if l.rawBlock {
if i := l.indexRegexp(rOpenEndRawLookAhead); i != -1 {
// {{{{/
l.rawBlock = false
l.pos += i
next = lexOpenMustache
} else {
return l.errorf("Unclosed raw block")
}
} else if l.isString(escapedEscapedOpenMustache) {
// \\{{
// emit content with only one escaped escape
l.next()
l.emitContent()
// ignore second escaped escape
l.next()
l.ignore()
next = lexContent
} else if l.isString(escapedOpenMustache) {
// \{{
next = lexEscapedOpenMustache
} else if str := l.findRegexp(rOpenCommentDash); str != "" {
// {{!--
l.closeComment = rCloseCommentDash
next = lexComment
} else if str := l.findRegexp(rOpenComment); str != "" {
// {{!
l.closeComment = rCloseComment
next = lexComment
} else if l.isString(openMustache) {
// {{
next = lexOpenMustache
}
if next != nil {
// emit scanned content
l.emitContent()
// scan next token
return next
}
// scan next rune
if l.next() == eof {
// emit scanned content
l.emitContent()
// this is over
l.emit(TokenEOF)
return nil
}
// continue content scanning
return lexContent
}
// lexEscapedOpenMustache scans \{{
func lexEscapedOpenMustache(l *Lexer) lexFunc {
// ignore escape character
l.next()
l.ignore()
// scan mustaches
for l.peek() == '{' {
l.next()
}
return lexContent
}
// lexOpenMustache scans {{
func lexOpenMustache(l *Lexer) lexFunc {
var str string
var tok TokenKind
nextFunc := lexExpression
if str = l.findRegexp(rOpenEndRaw); str != "" {
tok = TokenOpenEndRawBlock
} else if str = l.findRegexp(rOpenRaw); str != "" {
tok = TokenOpenRawBlock
l.rawBlock = true
} else if str = l.findRegexp(rOpenUnescaped); str != "" {
tok = TokenOpenUnescaped
} else if str = l.findRegexp(rOpenBlock); str != "" {
tok = TokenOpenBlock
} else if str = l.findRegexp(rOpenEndBlock); str != "" {
tok = TokenOpenEndBlock
} else if str = l.findRegexp(rOpenPartial); str != "" {
tok = TokenOpenPartial
} else if str = l.findRegexp(rInverse); str != "" {
tok = TokenInverse
nextFunc = lexContent
} else if str = l.findRegexp(rOpenInverse); str != "" {
tok = TokenOpenInverse
} else if str = l.findRegexp(rOpenInverseChain); str != "" {
tok = TokenOpenInverseChain
} else if str = l.findRegexp(rOpen); str != "" {
tok = TokenOpen
} else {
// this is rotten
panic("Current pos MUST be an opening mustache")
}
l.pos += len(str)
l.emit(tok)
return nextFunc
}
// lexCloseMustache scans }} or ~}}
func lexCloseMustache(l *Lexer) lexFunc {
var str string
var tok TokenKind
if str = l.findRegexp(rCloseRaw); str != "" {
// }}}}
tok = TokenCloseRawBlock
} else if str = l.findRegexp(rCloseUnescaped); str != "" {
// }}}
tok = TokenCloseUnescaped
} else if str = l.findRegexp(rClose); str != "" {
// }}
tok = TokenClose
} else {
// this is rotten
panic("Current pos MUST be a closing mustache")
}
l.pos += len(str)
l.emit(tok)
return lexContent
}
// lexExpression scans inside mustaches
func lexExpression(l *Lexer) lexFunc {
// search close mustache delimiter
if l.isString(closeMustache) || l.isString(closeStripMustache) || l.isString(closeUnescapedStripMustache) {
return lexCloseMustache
}
// search some patterns before advancing scanning position
// "as |"
if str := l.findRegexp(rOpenBlockParams); str != "" {
l.pos += len(str)
l.emit(TokenOpenBlockParams)
return lexExpression
}
// ..
if l.isString("..") {
l.pos += len("..")
l.emit(TokenID)
return lexExpression
}
// .
if str := l.findRegexp(rDotID); str != "" {
l.pos += len(".")
l.emit(TokenID)
return lexExpression
}
// true
if str := l.findRegexp(rTrue); str != "" {
l.pos += len("true")
l.emit(TokenBoolean)
return lexExpression
}
// false
if str := l.findRegexp(rFalse); str != "" {
l.pos += len("false")
l.emit(TokenBoolean)
return lexExpression
}
// let's scan next character
switch r := l.next(); {
case r == eof:
return l.errorf("Unclosed expression")
case isIgnorable(r):
return lexIgnorable
case r == '(':
l.emit(TokenOpenSexpr)
case r == ')':
l.emit(TokenCloseSexpr)
case r == '=':
l.emit(TokenEquals)
case r == '@':
l.emit(TokenData)
case r == '"' || r == '\'':
l.backup()
return lexString
case r == '/' || r == '.':
l.emit(TokenSep)
case r == '|':
l.emit(TokenCloseBlockParams)
case r == '+' || r == '-' || (r >= '0' && r <= '9'):
l.backup()
return lexNumber
case r == '[':
return lexPathLiteral
case strings.IndexRune(unallowedIDChars, r) < 0:
l.backup()
return lexIdentifier
default:
return l.errorf("Unexpected character in expression: '%c'", r)
}
return lexExpression
}
// lexComment scans {{!-- or {{!
func lexComment(l *Lexer) lexFunc {
if str := l.findRegexp(l.closeComment); str != "" {
l.pos += len(str)
l.emit(TokenComment)
return lexContent
}
if r := l.next(); r == eof {
return l.errorf("Unclosed comment")
}
return lexComment
}
// lexIgnorable scans all following ignorable characters
func lexIgnorable(l *Lexer) lexFunc {
for isIgnorable(l.peek()) {
l.next()
}
l.ignore()
return lexExpression
}
// lexString scans a string
func lexString(l *Lexer) lexFunc {
// get string delimiter
delim := l.next()
var prev rune
// ignore delimiter
l.ignore()
for {
r := l.next()
if r == eof || r == '\n' {
return l.errorf("Unterminated string")
}
if (r == delim) && (prev != '\\') {
break
}
prev = r
}
// remove end delimiter
l.backup()
// emit string
l.emitString(delim)
// skip end delimiter
l.next()
l.ignore()
return lexExpression
}
// lexNumber scans a number: decimal, octal, hex, float, or imaginary. This
// isn't a perfect number scanner - for instance it accepts "." and "0x0.2"
// and "089" - but when it's wrong the input is invalid and the parser (via
// strconv) will notice.
//
// NOTE: borrowed from https://github.com/golang/go/tree/master/src/text/template/parse/lex.go
func lexNumber(l *Lexer) lexFunc {
if !l.scanNumber() {
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
}
if sign := l.peek(); sign == '+' || sign == '-' {
// Complex: 1+2i. No spaces, must end in 'i'.
if !l.scanNumber() || l.input[l.pos-1] != 'i' {
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
}
l.emit(TokenNumber)
} else {
l.emit(TokenNumber)
}
return lexExpression
}
// scanNumber scans a number
//
// NOTE: borrowed from https://github.com/golang/go/tree/master/src/text/template/parse/lex.go
func (l *Lexer) scanNumber() bool {
// Optional leading sign.
l.accept("+-")
// Is it hex?
digits := "0123456789"
if l.accept("0") && l.accept("xX") {
digits = "0123456789abcdefABCDEF"
}
l.acceptRun(digits)
if l.accept(".") {
l.acceptRun(digits)
}
if l.accept("eE") {
l.accept("+-")
l.acceptRun("0123456789")
}
// Is it imaginary?
l.accept("i")
// Next thing mustn't be alphanumeric.
if isAlphaNumeric(l.peek()) {
l.next()
return false
}
return true
}
// lexIdentifier scans an ID
func lexIdentifier(l *Lexer) lexFunc {
str := l.findRegexp(rID)
if len(str) == 0 {
// this is rotten
panic("Identifier expected")
}
l.pos += len(str)
l.emit(TokenID)
return lexExpression
}
// lexPathLiteral scans an [ID]
func lexPathLiteral(l *Lexer) lexFunc {
for {
r := l.next()
if r == eof || r == '\n' {
return l.errorf("Unterminated path literal")
}
if r == ']' {
break
}
}
l.emit(TokenID)
return lexExpression
}
// isIgnorable returns true if the given character is ignorable (i.e. whitespace or line feed)
func isIgnorable(r rune) bool {
return r == ' ' || r == '\t' || r == '\n'
}
// isAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
//
// NOTE borrowed from https://github.com/golang/go/tree/master/src/text/template/parse/lex.go
func isAlphaNumeric(r rune) bool {
return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
}

183
vendor/github.com/aymerick/raymond/lexer/token.go generated vendored Normal file

@ -0,0 +1,183 @@
package lexer
import "fmt"
const (
// TokenError represents an error
TokenError TokenKind = iota
// TokenEOF represents an End Of File
TokenEOF
//
// Mustache delimiters
//
// TokenOpen is the OPEN token
TokenOpen
// TokenClose is the CLOSE token
TokenClose
// TokenOpenRawBlock is the OPEN_RAW_BLOCK token
TokenOpenRawBlock
// TokenCloseRawBlock is the CLOSE_RAW_BLOCK token
TokenCloseRawBlock
// TokenOpenEndRawBlock is the END_RAW_BLOCK token
TokenOpenEndRawBlock
// TokenOpenUnescaped is the OPEN_UNESCAPED token
TokenOpenUnescaped
// TokenCloseUnescaped is the CLOSE_UNESCAPED token
TokenCloseUnescaped
// TokenOpenBlock is the OPEN_BLOCK token
TokenOpenBlock
// TokenOpenEndBlock is the OPEN_ENDBLOCK token
TokenOpenEndBlock
// TokenInverse is the INVERSE token
TokenInverse
// TokenOpenInverse is the OPEN_INVERSE token
TokenOpenInverse
// TokenOpenInverseChain is the OPEN_INVERSE_CHAIN token
TokenOpenInverseChain
// TokenOpenPartial is the OPEN_PARTIAL token
TokenOpenPartial
// TokenComment is the COMMENT token
TokenComment
//
// Inside mustaches
//
// TokenOpenSexpr is the OPEN_SEXPR token
TokenOpenSexpr
// TokenCloseSexpr is the CLOSE_SEXPR token
TokenCloseSexpr
// TokenEquals is the EQUALS token
TokenEquals
// TokenData is the DATA token
TokenData
// TokenSep is the SEP token
TokenSep
// TokenOpenBlockParams is the OPEN_BLOCK_PARAMS token
TokenOpenBlockParams
// TokenCloseBlockParams is the CLOSE_BLOCK_PARAMS token
TokenCloseBlockParams
//
// Tokens with content
//
// TokenContent is the CONTENT token
TokenContent
// TokenID is the ID token
TokenID
// TokenString is the STRING token
TokenString
// TokenNumber is the NUMBER token
TokenNumber
// TokenBoolean is the BOOLEAN token
TokenBoolean
)
const (
// Option to generate token position in its string representation
dumpTokenPos = false
// Option to generate values for all token kinds for their string representations
dumpAllTokensVal = true
)
// TokenKind represents a Token type.
type TokenKind int
// Token represents a scanned token.
type Token struct {
Kind TokenKind // Token kind
Val string // Token value
Pos int // Byte position in input string
Line int // Line number in input string
}
// tokenName maps a token kind to its name, for display purposes
var tokenName = map[TokenKind]string{
TokenError: "Error",
TokenEOF: "EOF",
TokenContent: "Content",
TokenComment: "Comment",
TokenOpen: "Open",
TokenClose: "Close",
TokenOpenUnescaped: "OpenUnescaped",
TokenCloseUnescaped: "CloseUnescaped",
TokenOpenBlock: "OpenBlock",
TokenOpenEndBlock: "OpenEndBlock",
TokenOpenRawBlock: "OpenRawBlock",
TokenCloseRawBlock: "CloseRawBlock",
TokenOpenEndRawBlock: "OpenEndRawBlock",
TokenOpenBlockParams: "OpenBlockParams",
TokenCloseBlockParams: "CloseBlockParams",
TokenInverse: "Inverse",
TokenOpenInverse: "OpenInverse",
TokenOpenInverseChain: "OpenInverseChain",
TokenOpenPartial: "OpenPartial",
TokenOpenSexpr: "OpenSexpr",
TokenCloseSexpr: "CloseSexpr",
TokenID: "ID",
TokenEquals: "Equals",
TokenString: "String",
TokenNumber: "Number",
TokenBoolean: "Boolean",
TokenData: "Data",
TokenSep: "Sep",
}
// String returns the token kind string representation for debugging.
func (k TokenKind) String() string {
s := tokenName[k]
if s == "" {
return fmt.Sprintf("Token-%d", int(k))
}
return s
}
// String returns the token string representation for debugging.
func (t Token) String() string {
result := ""
if dumpTokenPos {
result += fmt.Sprintf("%d:", t.Pos)
}
result += fmt.Sprintf("%s", t.Kind)
if (dumpAllTokensVal || (t.Kind >= TokenContent)) && len(t.Val) > 0 {
if len(t.Val) > 100 {
result += fmt.Sprintf("{%.20q...}", t.Val)
} else {
result += fmt.Sprintf("{%q}", t.Val)
}
}
return result
}
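The lexer above is normally driven by the parser further down, but it can also be used directly: `lexer.Scan` returns a `Lexer` and `NextToken` yields `Token` values until `TokenEOF` or `TokenError` (both calls appear in parser.go below). A minimal usage sketch; the template string and the printed token stream are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond/lexer"
)

func main() {
	lex := lexer.Scan("Hello {{name}}!")
	for {
		tok := lex.NextToken()
		// Token.String prints the kind and, for content-bearing tokens, the value,
		// e.g. Content{"Hello "}, Open{"{{"}, ID{"name"}, Close{"}}"}, EOF.
		fmt.Println(tok)
		if tok.Kind == lexer.TokenEOF || tok.Kind == lexer.TokenError {
			break
		}
	}
}
```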

846
vendor/github.com/aymerick/raymond/parser/parser.go generated vendored Normal file

@ -0,0 +1,846 @@
// Package parser provides a handlebars syntax analyser. It consumes the tokens provided by the lexer to build an AST.
package parser
import (
"fmt"
"regexp"
"runtime"
"strconv"
"github.com/aymerick/raymond/ast"
"github.com/aymerick/raymond/lexer"
)
// References:
// - https://github.com/wycats/handlebars.js/blob/master/src/handlebars.yy
// - https://github.com/golang/go/blob/master/src/text/template/parse/parse.go
// parser is a syntax analyzer.
type parser struct {
// Lexer
lex *lexer.Lexer
// Root node
root ast.Node
// Tokens parsed but not consumed yet
tokens []*lexer.Token
// All tokens have been retrieved from the lexer
lexOver bool
}
var (
rOpenComment = regexp.MustCompile(`^\{\{~?!-?-?`)
rCloseComment = regexp.MustCompile(`-?-?~?\}\}$`)
rOpenAmp = regexp.MustCompile(`^\{\{~?&`)
)
// new instantiates a new parser
func new(input string) *parser {
return &parser{
lex: lexer.Scan(input),
}
}
// Parse analyzes given input and returns the AST root node.
func Parse(input string) (result *ast.Program, err error) {
// recover error
defer errRecover(&err)
parser := new(input)
// parse
result = parser.parseProgram()
// check last token
token := parser.shift()
if token.Kind != lexer.TokenEOF {
// Parsing ended before EOF
errToken(token, "Syntax error")
}
// fix whitespaces
processWhitespaces(result)
// named returned values
return
}
// errRecover recovers parsing panic
func errRecover(errp *error) {
e := recover()
if e != nil {
switch err := e.(type) {
case runtime.Error:
panic(e)
case error:
*errp = err
default:
panic(e)
}
}
}
// errPanic panics
func errPanic(err error, line int) {
panic(fmt.Errorf("Parse error on line %d:\n%s", line, err))
}
// errNode panics with the given node info
func errNode(node ast.Node, msg string) {
errPanic(fmt.Errorf("%s\nNode: %s", msg, node), node.Location().Line)
}
// errToken panics with the given token info
func errToken(tok *lexer.Token, msg string) {
errPanic(fmt.Errorf("%s\nToken: %s", msg, tok), tok.Line)
}
// errExpected panics because of an unexpected token kind
func errExpected(expect lexer.TokenKind, tok *lexer.Token) {
errPanic(fmt.Errorf("Expecting %s, got: '%s'", expect, tok), tok.Line)
}
// program : statement*
func (p *parser) parseProgram() *ast.Program {
result := ast.NewProgram(p.next().Pos, p.next().Line)
for p.isStatement() {
result.AddStatement(p.parseStatement())
}
return result
}
// statement : mustache | block | rawBlock | partial | content | COMMENT
func (p *parser) parseStatement() ast.Node {
var result ast.Node
tok := p.next()
switch tok.Kind {
case lexer.TokenOpen, lexer.TokenOpenUnescaped:
// mustache
result = p.parseMustache()
case lexer.TokenOpenBlock:
// block
result = p.parseBlock()
case lexer.TokenOpenInverse:
// block
result = p.parseInverse()
case lexer.TokenOpenRawBlock:
// rawBlock
result = p.parseRawBlock()
case lexer.TokenOpenPartial:
// partial
result = p.parsePartial()
case lexer.TokenContent:
// content
result = p.parseContent()
case lexer.TokenComment:
// COMMENT
result = p.parseComment()
}
return result
}
// isStatement returns true if next token starts a statement
func (p *parser) isStatement() bool {
if !p.have(1) {
return false
}
switch p.next().Kind {
case lexer.TokenOpen, lexer.TokenOpenUnescaped, lexer.TokenOpenBlock,
lexer.TokenOpenInverse, lexer.TokenOpenRawBlock, lexer.TokenOpenPartial,
lexer.TokenContent, lexer.TokenComment:
return true
}
return false
}
// content : CONTENT
func (p *parser) parseContent() *ast.ContentStatement {
// CONTENT
tok := p.shift()
if tok.Kind != lexer.TokenContent {
// @todo This check can be removed if content is optional in a raw block
errExpected(lexer.TokenContent, tok)
}
return ast.NewContentStatement(tok.Pos, tok.Line, tok.Val)
}
// COMMENT
func (p *parser) parseComment() *ast.CommentStatement {
// COMMENT
tok := p.shift()
value := rOpenComment.ReplaceAllString(tok.Val, "")
value = rCloseComment.ReplaceAllString(value, "")
result := ast.NewCommentStatement(tok.Pos, tok.Line, value)
result.Strip = ast.NewStripForStr(tok.Val)
return result
}
// param* hash?
func (p *parser) parseExpressionParamsHash() ([]ast.Node, *ast.Hash) {
var params []ast.Node
var hash *ast.Hash
// params*
if p.isParam() {
params = p.parseParams()
}
// hash?
if p.isHashSegment() {
hash = p.parseHash()
}
return params, hash
}
// helperName param* hash?
func (p *parser) parseExpression(tok *lexer.Token) *ast.Expression {
result := ast.NewExpression(tok.Pos, tok.Line)
// helperName
result.Path = p.parseHelperName()
// param* hash?
result.Params, result.Hash = p.parseExpressionParamsHash()
return result
}
// rawBlock : openRawBlock content endRawBlock
// openRawBlock : OPEN_RAW_BLOCK helperName param* hash? CLOSE_RAW_BLOCK
// endRawBlock : OPEN_END_RAW_BLOCK helperName CLOSE_RAW_BLOCK
func (p *parser) parseRawBlock() *ast.BlockStatement {
// OPEN_RAW_BLOCK
tok := p.shift()
result := ast.NewBlockStatement(tok.Pos, tok.Line)
// helperName param* hash?
result.Expression = p.parseExpression(tok)
openName := result.Expression.Canonical()
// CLOSE_RAW_BLOCK
tok = p.shift()
if tok.Kind != lexer.TokenCloseRawBlock {
errExpected(lexer.TokenCloseRawBlock, tok)
}
// content
// @todo Is content mandatory in a raw block ?
content := p.parseContent()
program := ast.NewProgram(tok.Pos, tok.Line)
program.AddStatement(content)
result.Program = program
// OPEN_END_RAW_BLOCK
tok = p.shift()
if tok.Kind != lexer.TokenOpenEndRawBlock {
// should never happen as it is caught by lexer
errExpected(lexer.TokenOpenEndRawBlock, tok)
}
// helperName
endID := p.parseHelperName()
closeName, ok := ast.HelperNameStr(endID)
if !ok {
errNode(endID, "Erroneous closing expression")
}
if openName != closeName {
errNode(endID, fmt.Sprintf("%s doesn't match %s", openName, closeName))
}
// CLOSE_RAW_BLOCK
tok = p.shift()
if tok.Kind != lexer.TokenCloseRawBlock {
errExpected(lexer.TokenCloseRawBlock, tok)
}
return result
}
// block : openBlock program inverseChain? closeBlock
func (p *parser) parseBlock() *ast.BlockStatement {
// openBlock
result, blockParams := p.parseOpenBlock()
// program
program := p.parseProgram()
program.BlockParams = blockParams
result.Program = program
// inverseChain?
if p.isInverseChain() {
result.Inverse = p.parseInverseChain()
}
// closeBlock
p.parseCloseBlock(result)
setBlockInverseStrip(result)
return result
}
// setBlockInverseStrip is called when parsing `block` (openBlock | openInverse) and `inverseChain`
//
// TODO: This was totally cargo culted ! CHECK THAT !
//
// cf. prepareBlock() in:
// https://github.com/wycats/handlebars.js/blob/master/lib/handlebars/compiler/helper.js
func setBlockInverseStrip(block *ast.BlockStatement) {
if block.Inverse == nil {
return
}
if block.Inverse.Chained {
b, _ := block.Inverse.Body[0].(*ast.BlockStatement)
b.CloseStrip = block.CloseStrip
}
block.InverseStrip = block.Inverse.Strip
}
// block : openInverse program inverseAndProgram? closeBlock
func (p *parser) parseInverse() *ast.BlockStatement {
// openInverse
result, blockParams := p.parseOpenBlock()
// program
program := p.parseProgram()
program.BlockParams = blockParams
result.Inverse = program
// inverseAndProgram?
if p.isInverse() {
result.Program = p.parseInverseAndProgram()
}
// closeBlock
p.parseCloseBlock(result)
setBlockInverseStrip(result)
return result
}
// helperName param* hash? blockParams?
func (p *parser) parseOpenBlockExpression(tok *lexer.Token) (*ast.BlockStatement, []string) {
var blockParams []string
result := ast.NewBlockStatement(tok.Pos, tok.Line)
// helperName param* hash?
result.Expression = p.parseExpression(tok)
// blockParams?
if p.isBlockParams() {
blockParams = p.parseBlockParams()
}
// named returned values
return result, blockParams
}
// inverseChain : openInverseChain program inverseChain?
// | inverseAndProgram
func (p *parser) parseInverseChain() *ast.Program {
if p.isInverse() {
// inverseAndProgram
return p.parseInverseAndProgram()
}
result := ast.NewProgram(p.next().Pos, p.next().Line)
// openInverseChain
block, blockParams := p.parseOpenBlock()
// program
program := p.parseProgram()
program.BlockParams = blockParams
block.Program = program
// inverseChain?
if p.isInverseChain() {
block.Inverse = p.parseInverseChain()
}
setBlockInverseStrip(block)
result.Chained = true
result.AddStatement(block)
return result
}
// Returns true if current token starts an inverse chain
func (p *parser) isInverseChain() bool {
return p.isOpenInverseChain() || p.isInverse()
}
// inverseAndProgram : INVERSE program
func (p *parser) parseInverseAndProgram() *ast.Program {
// INVERSE
tok := p.shift()
// program
result := p.parseProgram()
result.Strip = ast.NewStripForStr(tok.Val)
return result
}
// openBlock : OPEN_BLOCK helperName param* hash? blockParams? CLOSE
// openInverse : OPEN_INVERSE helperName param* hash? blockParams? CLOSE
// openInverseChain: OPEN_INVERSE_CHAIN helperName param* hash? blockParams? CLOSE
func (p *parser) parseOpenBlock() (*ast.BlockStatement, []string) {
// OPEN_BLOCK | OPEN_INVERSE | OPEN_INVERSE_CHAIN
tok := p.shift()
// helperName param* hash? blockParams?
result, blockParams := p.parseOpenBlockExpression(tok)
// CLOSE
tokClose := p.shift()
if tokClose.Kind != lexer.TokenClose {
errExpected(lexer.TokenClose, tokClose)
}
result.OpenStrip = ast.NewStrip(tok.Val, tokClose.Val)
// named returned values
return result, blockParams
}
// closeBlock : OPEN_ENDBLOCK helperName CLOSE
func (p *parser) parseCloseBlock(block *ast.BlockStatement) {
// OPEN_ENDBLOCK
tok := p.shift()
if tok.Kind != lexer.TokenOpenEndBlock {
errExpected(lexer.TokenOpenEndBlock, tok)
}
// helperName
endID := p.parseHelperName()
closeName, ok := ast.HelperNameStr(endID)
if !ok {
errNode(endID, "Erroneous closing expression")
}
openName := block.Expression.Canonical()
if openName != closeName {
errNode(endID, fmt.Sprintf("%s doesn't match %s", openName, closeName))
}
// CLOSE
tokClose := p.shift()
if tokClose.Kind != lexer.TokenClose {
errExpected(lexer.TokenClose, tokClose)
}
block.CloseStrip = ast.NewStrip(tok.Val, tokClose.Val)
}
// mustache : OPEN helperName param* hash? CLOSE
// | OPEN_UNESCAPED helperName param* hash? CLOSE_UNESCAPED
func (p *parser) parseMustache() *ast.MustacheStatement {
// OPEN | OPEN_UNESCAPED
tok := p.shift()
closeToken := lexer.TokenClose
if tok.Kind == lexer.TokenOpenUnescaped {
closeToken = lexer.TokenCloseUnescaped
}
unescaped := false
if (tok.Kind == lexer.TokenOpenUnescaped) || (rOpenAmp.MatchString(tok.Val)) {
unescaped = true
}
result := ast.NewMustacheStatement(tok.Pos, tok.Line, unescaped)
// helperName param* hash?
result.Expression = p.parseExpression(tok)
// CLOSE | CLOSE_UNESCAPED
tokClose := p.shift()
if tokClose.Kind != closeToken {
errExpected(closeToken, tokClose)
}
result.Strip = ast.NewStrip(tok.Val, tokClose.Val)
return result
}
// partial : OPEN_PARTIAL partialName param* hash? CLOSE
func (p *parser) parsePartial() *ast.PartialStatement {
// OPEN_PARTIAL
tok := p.shift()
result := ast.NewPartialStatement(tok.Pos, tok.Line)
// partialName
result.Name = p.parsePartialName()
// param* hash?
result.Params, result.Hash = p.parseExpressionParamsHash()
// CLOSE
tokClose := p.shift()
if tokClose.Kind != lexer.TokenClose {
errExpected(lexer.TokenClose, tokClose)
}
result.Strip = ast.NewStrip(tok.Val, tokClose.Val)
return result
}
// helperName | sexpr
func (p *parser) parseHelperNameOrSexpr() ast.Node {
if p.isSexpr() {
// sexpr
return p.parseSexpr()
}
// helperName
return p.parseHelperName()
}
// param : helperName | sexpr
func (p *parser) parseParam() ast.Node {
return p.parseHelperNameOrSexpr()
}
// Returns true if next tokens represent a `param`
func (p *parser) isParam() bool {
return (p.isSexpr() || p.isHelperName()) && !p.isHashSegment()
}
// param*
func (p *parser) parseParams() []ast.Node {
var result []ast.Node
for p.isParam() {
result = append(result, p.parseParam())
}
return result
}
// sexpr : OPEN_SEXPR helperName param* hash? CLOSE_SEXPR
func (p *parser) parseSexpr() *ast.SubExpression {
// OPEN_SEXPR
tok := p.shift()
result := ast.NewSubExpression(tok.Pos, tok.Line)
// helperName param* hash?
result.Expression = p.parseExpression(tok)
// CLOSE_SEXPR
tok = p.shift()
if tok.Kind != lexer.TokenCloseSexpr {
errExpected(lexer.TokenCloseSexpr, tok)
}
return result
}
// hash : hashSegment+
func (p *parser) parseHash() *ast.Hash {
var pairs []*ast.HashPair
for p.isHashSegment() {
pairs = append(pairs, p.parseHashSegment())
}
firstLoc := pairs[0].Location()
result := ast.NewHash(firstLoc.Pos, firstLoc.Line)
result.Pairs = pairs
return result
}
// returns true if the next tokens represent a `hashSegment`
func (p *parser) isHashSegment() bool {
return p.have(2) && (p.next().Kind == lexer.TokenID) && (p.nextAt(1).Kind == lexer.TokenEquals)
}
// hashSegment : ID EQUALS param
func (p *parser) parseHashSegment() *ast.HashPair {
// ID
tok := p.shift()
// EQUALS
p.shift()
// param
param := p.parseParam()
result := ast.NewHashPair(tok.Pos, tok.Line)
result.Key = tok.Val
result.Val = param
return result
}
// blockParams : OPEN_BLOCK_PARAMS ID+ CLOSE_BLOCK_PARAMS
func (p *parser) parseBlockParams() []string {
var result []string
// OPEN_BLOCK_PARAMS
tok := p.shift()
// ID+
for p.isID() {
result = append(result, p.shift().Val)
}
if len(result) == 0 {
errExpected(lexer.TokenID, p.next())
}
// CLOSE_BLOCK_PARAMS
tok = p.shift()
if tok.Kind != lexer.TokenCloseBlockParams {
errExpected(lexer.TokenCloseBlockParams, tok)
}
return result
}
// helperName : path | dataName | STRING | NUMBER | BOOLEAN | UNDEFINED | NULL
func (p *parser) parseHelperName() ast.Node {
var result ast.Node
tok := p.next()
switch tok.Kind {
case lexer.TokenBoolean:
// BOOLEAN
p.shift()
result = ast.NewBooleanLiteral(tok.Pos, tok.Line, (tok.Val == "true"), tok.Val)
case lexer.TokenNumber:
// NUMBER
p.shift()
val, isInt := parseNumber(tok)
result = ast.NewNumberLiteral(tok.Pos, tok.Line, val, isInt, tok.Val)
case lexer.TokenString:
// STRING
p.shift()
result = ast.NewStringLiteral(tok.Pos, tok.Line, tok.Val)
case lexer.TokenData:
// dataName
result = p.parseDataName()
default:
// path
result = p.parsePath(false)
}
return result
}
// parseNumber parses a number
func parseNumber(tok *lexer.Token) (result float64, isInt bool) {
var valInt int
var err error
valInt, err = strconv.Atoi(tok.Val)
if err == nil {
isInt = true
result = float64(valInt)
} else {
isInt = false
result, err = strconv.ParseFloat(tok.Val, 64)
if err != nil {
errToken(tok, fmt.Sprintf("Failed to parse number: %s", tok.Val))
}
}
// named returned values
return
}
// Returns true if next tokens represent a `helperName`
func (p *parser) isHelperName() bool {
switch p.next().Kind {
case lexer.TokenBoolean, lexer.TokenNumber, lexer.TokenString, lexer.TokenData, lexer.TokenID:
return true
}
return false
}
// partialName : helperName | sexpr
func (p *parser) parsePartialName() ast.Node {
return p.parseHelperNameOrSexpr()
}
// dataName : DATA pathSegments
func (p *parser) parseDataName() *ast.PathExpression {
// DATA
p.shift()
// pathSegments
return p.parsePath(true)
}
// path : pathSegments
// pathSegments : pathSegments SEP ID
// | ID
func (p *parser) parsePath(data bool) *ast.PathExpression {
var tok *lexer.Token
// ID
tok = p.shift()
if tok.Kind != lexer.TokenID {
errExpected(lexer.TokenID, tok)
}
result := ast.NewPathExpression(tok.Pos, tok.Line, data)
result.Part(tok.Val)
for p.isPathSep() {
// SEP
tok = p.shift()
result.Sep(tok.Val)
// ID
tok = p.shift()
if tok.Kind != lexer.TokenID {
errExpected(lexer.TokenID, tok)
}
result.Part(tok.Val)
if len(result.Parts) > 0 {
switch tok.Val {
case "..", ".", "this":
errToken(tok, "Invalid path: "+result.Original)
}
}
}
return result
}
// Ensures there is a token to parse at the given index
func (p *parser) ensure(index int) {
if p.lexOver {
// nothing more to grab
return
}
nb := index + 1
for len(p.tokens) < nb {
// fetch next token
tok := p.lex.NextToken()
// queue it
p.tokens = append(p.tokens, &tok)
if (tok.Kind == lexer.TokenEOF) || (tok.Kind == lexer.TokenError) {
p.lexOver = true
break
}
}
}
// have returns true if there are at least the given number of tokens left to consume
func (p *parser) have(nb int) bool {
p.ensure(nb - 1)
return len(p.tokens) >= nb
}
// nextAt returns next token at given index, without consuming it
func (p *parser) nextAt(index int) *lexer.Token {
p.ensure(index)
return p.tokens[index]
}
// next returns next token without consuming it
func (p *parser) next() *lexer.Token {
return p.nextAt(0)
}
// shift returns the next token and removes it from the tokens buffer
//
// Panics if next token is `TokenError`
func (p *parser) shift() *lexer.Token {
var result *lexer.Token
p.ensure(0)
result, p.tokens = p.tokens[0], p.tokens[1:]
// check error token
if result.Kind == lexer.TokenError {
errToken(result, "Lexer error")
}
return result
}
// isToken returns true if next token is of given type
func (p *parser) isToken(kind lexer.TokenKind) bool {
return p.have(1) && p.next().Kind == kind
}
// isSexpr returns true if next token starts a sexpr
func (p *parser) isSexpr() bool {
return p.isToken(lexer.TokenOpenSexpr)
}
// isPathSep returns true if next token is a path separator
func (p *parser) isPathSep() bool {
return p.isToken(lexer.TokenSep)
}
// isID returns true if next token is an ID
func (p *parser) isID() bool {
return p.isToken(lexer.TokenID)
}
// isBlockParams returns true if next token starts a block params
func (p *parser) isBlockParams() bool {
return p.isToken(lexer.TokenOpenBlockParams)
}
// isInverse returns true if next token starts an INVERSE sequence
func (p *parser) isInverse() bool {
return p.isToken(lexer.TokenInverse)
}
// isOpenInverseChain returns true if next token is OPEN_INVERSE_CHAIN
func (p *parser) isOpenInverseChain() bool {
return p.isToken(lexer.TokenOpenInverseChain)
}
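`parser.Parse` is usually reached through the raymond `Template` type shown later in this diff, but it can also be called on its own to inspect the AST; `ast.Print`, used by `Template.PrintAST` below, dumps the tree. A minimal sketch with an arbitrary template string:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond/ast"
	"github.com/aymerick/raymond/parser"
)

func main() {
	program, err := parser.Parse("{{#if ok}}yes{{else}}no{{/if}}")
	if err != nil {
		panic(err)
	}
	// Print a string representation of the parsed program.
	fmt.Println(ast.Print(program))
}
```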

360
vendor/github.com/aymerick/raymond/parser/whitespace.go generated vendored Normal file

@ -0,0 +1,360 @@
package parser
import (
"regexp"
"github.com/aymerick/raymond/ast"
)
// whitespaceVisitor walks through the AST to perform whitespace control
//
// The logic was shamelessly borrowed from:
// https://github.com/wycats/handlebars.js/blob/master/lib/handlebars/compiler/whitespace-control.js
type whitespaceVisitor struct {
isRootSeen bool
}
var (
rTrimLeft = regexp.MustCompile(`^[ \t]*\r?\n?`)
rTrimLeftMultiple = regexp.MustCompile(`^\s+`)
rTrimRight = regexp.MustCompile(`[ \t]+$`)
rTrimRightMultiple = regexp.MustCompile(`\s+$`)
rPrevWhitespace = regexp.MustCompile(`\r?\n\s*?$`)
rPrevWhitespaceStart = regexp.MustCompile(`(^|\r?\n)\s*?$`)
rNextWhitespace = regexp.MustCompile(`^\s*?\r?\n`)
rNextWhitespaceEnd = regexp.MustCompile(`^\s*?(\r?\n|$)`)
rPartialIndent = regexp.MustCompile(`([ \t]+$)`)
)
// newWhitespaceVisitor instantiates a new whitespaceVisitor
func newWhitespaceVisitor() *whitespaceVisitor {
return &whitespaceVisitor{}
}
// processWhitespaces performs whitespace control on given AST
//
// WARNING: It must be called only once on AST.
func processWhitespaces(node ast.Node) {
node.Accept(newWhitespaceVisitor())
}
func omitRightFirst(body []ast.Node, multiple bool) {
omitRight(body, -1, multiple)
}
func omitRight(body []ast.Node, i int, multiple bool) {
if i+1 >= len(body) {
return
}
current := body[i+1]
node, ok := current.(*ast.ContentStatement)
if !ok {
return
}
if !multiple && node.RightStripped {
return
}
original := node.Value
r := rTrimLeft
if multiple {
r = rTrimLeftMultiple
}
node.Value = r.ReplaceAllString(node.Value, "")
node.RightStripped = (original != node.Value)
}
func omitLeftLast(body []ast.Node, multiple bool) {
omitLeft(body, len(body), multiple)
}
func omitLeft(body []ast.Node, i int, multiple bool) bool {
if i-1 < 0 {
return false
}
current := body[i-1]
node, ok := current.(*ast.ContentStatement)
if !ok {
return false
}
if !multiple && node.LeftStripped {
return false
}
original := node.Value
r := rTrimRight
if multiple {
r = rTrimRightMultiple
}
node.Value = r.ReplaceAllString(node.Value, "")
node.LeftStripped = (original != node.Value)
return node.LeftStripped
}
func isPrevWhitespace(body []ast.Node) bool {
return isPrevWhitespaceProgram(body, len(body), false)
}
func isPrevWhitespaceProgram(body []ast.Node, i int, isRoot bool) bool {
if i < 1 {
return isRoot
}
prev := body[i-1]
if node, ok := prev.(*ast.ContentStatement); ok {
if (node.Value == "") && node.RightStripped {
// already stripped, so it may be an empty string not caught by the regexp
return true
}
r := rPrevWhitespaceStart
if (i > 1) || !isRoot {
r = rPrevWhitespace
}
return r.MatchString(node.Value)
}
return false
}
func isNextWhitespace(body []ast.Node) bool {
return isNextWhitespaceProgram(body, -1, false)
}
func isNextWhitespaceProgram(body []ast.Node, i int, isRoot bool) bool {
if i+1 >= len(body) {
return isRoot
}
next := body[i+1]
if node, ok := next.(*ast.ContentStatement); ok {
if (node.Value == "") && node.LeftStripped {
// already stripped, so it may be an empty string not caught by the regexp
return true
}
r := rNextWhitespaceEnd
if (i+2 > len(body)) || !isRoot {
r = rNextWhitespace
}
return r.MatchString(node.Value)
}
return false
}
//
// Visitor interface
//
func (v *whitespaceVisitor) VisitProgram(program *ast.Program) interface{} {
isRoot := !v.isRootSeen
v.isRootSeen = true
body := program.Body
for i, current := range body {
strip, _ := current.Accept(v).(*ast.Strip)
if strip == nil {
continue
}
_isPrevWhitespace := isPrevWhitespaceProgram(body, i, isRoot)
_isNextWhitespace := isNextWhitespaceProgram(body, i, isRoot)
openStandalone := strip.OpenStandalone && _isPrevWhitespace
closeStandalone := strip.CloseStandalone && _isNextWhitespace
inlineStandalone := strip.InlineStandalone && _isPrevWhitespace && _isNextWhitespace
if strip.Close {
omitRight(body, i, true)
}
if strip.Open && (i > 0) {
omitLeft(body, i, true)
}
if inlineStandalone {
omitRight(body, i, false)
if omitLeft(body, i, false) {
// If we are on a standalone node, save the indent info for partials
if partial, ok := current.(*ast.PartialStatement); ok {
// Pull out the whitespace from the final line
if i > 0 {
if prevContent, ok := body[i-1].(*ast.ContentStatement); ok {
partial.Indent = rPartialIndent.FindString(prevContent.Original)
}
}
}
}
}
if b, ok := current.(*ast.BlockStatement); ok {
if openStandalone {
prog := b.Program
if prog == nil {
prog = b.Inverse
}
omitRightFirst(prog.Body, false)
// Strip out the previous content node if it's whitespace only
omitLeft(body, i, false)
}
if closeStandalone {
prog := b.Inverse
if prog == nil {
prog = b.Program
}
// Always strip the next node
omitRight(body, i, false)
omitLeftLast(prog.Body, false)
}
}
}
return nil
}
func (v *whitespaceVisitor) VisitBlock(block *ast.BlockStatement) interface{} {
if block.Program != nil {
block.Program.Accept(v)
}
if block.Inverse != nil {
block.Inverse.Accept(v)
}
program := block.Program
inverse := block.Inverse
if program == nil {
program = inverse
inverse = nil
}
firstInverse := inverse
lastInverse := inverse
if (inverse != nil) && inverse.Chained {
b, _ := inverse.Body[0].(*ast.BlockStatement)
firstInverse = b.Program
for lastInverse.Chained {
b, _ := lastInverse.Body[len(lastInverse.Body)-1].(*ast.BlockStatement)
lastInverse = b.Program
}
}
closeProg := firstInverse
if closeProg == nil {
closeProg = program
}
strip := &ast.Strip{
Open: (block.OpenStrip != nil) && block.OpenStrip.Open,
Close: (block.CloseStrip != nil) && block.CloseStrip.Close,
OpenStandalone: isNextWhitespace(program.Body),
CloseStandalone: isPrevWhitespace(closeProg.Body),
}
if (block.OpenStrip != nil) && block.OpenStrip.Close {
omitRightFirst(program.Body, true)
}
if inverse != nil {
if block.InverseStrip != nil {
inverseStrip := block.InverseStrip
if inverseStrip.Open {
omitLeftLast(program.Body, true)
}
if inverseStrip.Close {
omitRightFirst(firstInverse.Body, true)
}
}
if (block.CloseStrip != nil) && block.CloseStrip.Open {
omitLeftLast(lastInverse.Body, true)
}
// Find standalone else statements
if isPrevWhitespace(program.Body) && isNextWhitespace(firstInverse.Body) {
omitLeftLast(program.Body, false)
omitRightFirst(firstInverse.Body, false)
}
} else if (block.CloseStrip != nil) && block.CloseStrip.Open {
omitLeftLast(program.Body, true)
}
return strip
}
func (v *whitespaceVisitor) VisitMustache(mustache *ast.MustacheStatement) interface{} {
return mustache.Strip
}
func _inlineStandalone(strip *ast.Strip) interface{} {
return &ast.Strip{
Open: strip.Open,
Close: strip.Close,
InlineStandalone: true,
}
}
func (v *whitespaceVisitor) VisitPartial(node *ast.PartialStatement) interface{} {
strip := node.Strip
if strip == nil {
strip = &ast.Strip{}
}
return _inlineStandalone(strip)
}
func (v *whitespaceVisitor) VisitComment(node *ast.CommentStatement) interface{} {
strip := node.Strip
if strip == nil {
strip = &ast.Strip{}
}
return _inlineStandalone(strip)
}
// NOOP
func (v *whitespaceVisitor) VisitContent(node *ast.ContentStatement) interface{} { return nil }
func (v *whitespaceVisitor) VisitExpression(node *ast.Expression) interface{} { return nil }
func (v *whitespaceVisitor) VisitSubExpression(node *ast.SubExpression) interface{} { return nil }
func (v *whitespaceVisitor) VisitPath(node *ast.PathExpression) interface{} { return nil }
func (v *whitespaceVisitor) VisitString(node *ast.StringLiteral) interface{} { return nil }
func (v *whitespaceVisitor) VisitBoolean(node *ast.BooleanLiteral) interface{} { return nil }
func (v *whitespaceVisitor) VisitNumber(node *ast.NumberLiteral) interface{} { return nil }
func (v *whitespaceVisitor) VisitHash(node *ast.Hash) interface{} { return nil }
func (v *whitespaceVisitor) VisitHashPair(node *ast.HashPair) interface{} { return nil }
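The visitor above implements handlebars-style whitespace control: a `~` adjacent to a mustache strips the surrounding whitespace, and standalone block, comment, and partial lines have their line breaks removed. A small sketch using `raymond.MustRender` (defined later in this diff); the quoted outputs are what the stripping rules above should produce:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	ctx := map[string]string{"name": "Drone"}

	// Without whitespace control, surrounding spaces are preserved.
	fmt.Printf("%q\n", raymond.MustRender("  {{name}}  ", ctx)) // "  Drone  "

	// A "~" on either side of the mustache strips the adjacent whitespace.
	fmt.Printf("%q\n", raymond.MustRender("  {{~name~}}  ", ctx)) // "Drone"
}
```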

85
vendor/github.com/aymerick/raymond/partial.go generated vendored Normal file

@ -0,0 +1,85 @@
package raymond
import (
"fmt"
"sync"
)
// partial represents a partial template
type partial struct {
name string
source string
tpl *Template
}
// partials stores all global partials
var partials map[string]*partial
// protects global partials
var partialsMutex sync.RWMutex
func init() {
partials = make(map[string]*partial)
}
// newPartial instantiates a new partial
func newPartial(name string, source string, tpl *Template) *partial {
return &partial{
name: name,
source: source,
tpl: tpl,
}
}
// RegisterPartial registers a global partial. That partial will be available to all templates.
func RegisterPartial(name string, source string) {
partialsMutex.Lock()
defer partialsMutex.Unlock()
if partials[name] != nil {
panic(fmt.Errorf("Partial already registered: %s", name))
}
partials[name] = newPartial(name, source, nil)
}
// RegisterPartials registers several global partials. Those partials will be available to all templates.
func RegisterPartials(partials map[string]string) {
for name, p := range partials {
RegisterPartial(name, p)
}
}
// RegisterPartialTemplate registers a global partial with given parsed template. That partial will be available to all templates.
func RegisterPartialTemplate(name string, tpl *Template) {
partialsMutex.Lock()
defer partialsMutex.Unlock()
if partials[name] != nil {
panic(fmt.Errorf("Partial already registered: %s", name))
}
partials[name] = newPartial(name, "", tpl)
}
// findPartial finds a registered global partial
func findPartial(name string) *partial {
partialsMutex.RLock()
defer partialsMutex.RUnlock()
return partials[name]
}
// template returns parsed partial template
func (p *partial) template() (*Template, error) {
if p.tpl == nil {
var err error
p.tpl, err = Parse(p.source)
if err != nil {
return nil, err
}
}
return p.tpl, nil
}
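A sketch of registering and calling a global partial; the `{{> name}}` form is the partial call handled by `parsePartial` in parser.go above, and `MustParse`/`MustExec` come from template.go later in this diff. Names and values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	// A global partial is available to every template.
	raymond.RegisterPartial("link", `<a href="{{url}}">{{label}}</a>`)

	tpl := raymond.MustParse("Visit {{> link}}")
	out := tpl.MustExec(map[string]string{
		"url":   "https://example.com",
		"label": "Example",
	})
	fmt.Println(out)
}
```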

28
vendor/github.com/aymerick/raymond/raymond.go generated vendored Normal file

@ -0,0 +1,28 @@
// Package raymond provides handlebars evaluation
package raymond
// Render parses a template and evaluates it with the given context.
//
// Note that this function is not optimal because the template is parsed every time it is called. Use the Parse() function instead.
func Render(source string, ctx interface{}) (string, error) {
// parse template
tpl, err := Parse(source)
if err != nil {
return "", err
}
// renders template
str, err := tpl.Exec(ctx)
if err != nil {
return "", err
}
return str, nil
}
// MustRender parses a template and evaluates it with the given context. It panics on error.
//
// Note that this function is not optimal because the template is parsed every time it is called. Use the Parse() function instead.
func MustRender(source string, ctx interface{}) string {
return MustParse(source).MustExec(ctx)
}
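A minimal usage sketch of the two entry points above; as their comments note, `Parse` plus `Exec` is the better choice when the same template is rendered repeatedly:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	ctx := map[string]string{"name": "World"}

	out, err := raymond.Render("Hello {{name}}!", ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // Hello World!

	// MustRender panics instead of returning an error.
	fmt.Println(raymond.MustRender("Bye {{name}}!", ctx))
}
```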

BIN
vendor/github.com/aymerick/raymond/raymond.png generated vendored Normal file (binary image, 13 KiB)

84
vendor/github.com/aymerick/raymond/string.go generated vendored Normal file

@ -0,0 +1,84 @@
package raymond
import (
"fmt"
"reflect"
"strconv"
)
// SafeString represents a string that must not be escaped.
//
// A SafeString can be returned by helpers to disable escaping.
type SafeString string
// isSafeString returns true if argument is a SafeString
func isSafeString(value interface{}) bool {
if _, ok := value.(SafeString); ok {
return true
}
return false
}
// Str returns string representation of any basic type value.
func Str(value interface{}) string {
return strValue(reflect.ValueOf(value))
}
// strValue returns string representation of a reflect.Value
func strValue(value reflect.Value) string {
result := ""
ival, ok := printableValue(value)
if !ok {
panic(fmt.Errorf("Can't print value: %q", value))
}
val := reflect.ValueOf(ival)
switch val.Kind() {
case reflect.Array, reflect.Slice:
for i := 0; i < val.Len(); i++ {
result += strValue(val.Index(i))
}
case reflect.Bool:
result = "false"
if val.Bool() {
result = "true"
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
result = fmt.Sprintf("%d", ival)
case reflect.Float32, reflect.Float64:
result = strconv.FormatFloat(val.Float(), 'f', -1, 64)
case reflect.Invalid:
result = ""
default:
result = fmt.Sprintf("%s", ival)
}
return result
}
// printableValue returns the, possibly indirected, interface value inside v that
// is best for a call to formatted printer.
//
// NOTE: borrowed from https://github.com/golang/go/tree/master/src/text/template/exec.go
func printableValue(v reflect.Value) (interface{}, bool) {
if v.Kind() == reflect.Ptr {
v, _ = indirect(v) // fmt.Fprint handles nil.
}
if !v.IsValid() {
return "", true
}
if !v.Type().Implements(errorType) && !v.Type().Implements(fmtStringerType) {
if v.CanAddr() && (reflect.PtrTo(v.Type()).Implements(errorType) || reflect.PtrTo(v.Type()).Implements(fmtStringerType)) {
v = v.Addr()
} else {
switch v.Kind() {
case reflect.Chan, reflect.Func:
return nil, false
}
}
}
return v.Interface(), true
}
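A quick sketch of `Str`, which flattens basic Go values the way `strValue` above does when output is emitted; `SafeString` is simply a string type that, per its doc comment, helpers can return to disable escaping:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	fmt.Println(raymond.Str(true))           // true
	fmt.Println(raymond.Str(3.14))           // 3.14
	fmt.Println(raymond.Str([]int{1, 2, 3})) // 123 (slice elements are concatenated)

	// SafeString marks a value as already escaped.
	safe := raymond.SafeString("<b>bold</b>")
	fmt.Println(string(safe)) // <b>bold</b>
}
```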

248
vendor/github.com/aymerick/raymond/template.go generated vendored Normal file

@ -0,0 +1,248 @@
package raymond
import (
"fmt"
"io/ioutil"
"reflect"
"runtime"
"sync"
"github.com/aymerick/raymond/ast"
"github.com/aymerick/raymond/parser"
)
// Template represents a handlebars template.
type Template struct {
source string
program *ast.Program
helpers map[string]reflect.Value
partials map[string]*partial
mutex sync.RWMutex // protects helpers and partials
}
// newTemplate instantiates a new template without parsing it
func newTemplate(source string) *Template {
return &Template{
source: source,
helpers: make(map[string]reflect.Value),
partials: make(map[string]*partial),
}
}
// Parse instantiates a template by parsing the given source.
func Parse(source string) (*Template, error) {
tpl := newTemplate(source)
// parse template
if err := tpl.parse(); err != nil {
return nil, err
}
return tpl, nil
}
// MustParse instantiates a template by parsing the given source. It panics on error.
func MustParse(source string) *Template {
result, err := Parse(source)
if err != nil {
panic(err)
}
return result
}
// ParseFile reads given file and returns parsed template.
func ParseFile(filePath string) (*Template, error) {
b, err := ioutil.ReadFile(filePath)
if err != nil {
return nil, err
}
return Parse(string(b))
}
// parse parses the template
//
// It can be called several times; the parsing will be done only once.
func (tpl *Template) parse() error {
if tpl.program == nil {
var err error
tpl.program, err = parser.Parse(tpl.source)
if err != nil {
return err
}
}
return nil
}
// Clone returns a copy of that template.
func (tpl *Template) Clone() *Template {
result := newTemplate(tpl.source)
result.program = tpl.program
tpl.mutex.RLock()
defer tpl.mutex.RUnlock()
for name, helper := range tpl.helpers {
result.RegisterHelper(name, helper.Interface())
}
for name, partial := range tpl.partials {
result.addPartial(name, partial.source, partial.tpl)
}
return result
}
func (tpl *Template) findHelper(name string) reflect.Value {
tpl.mutex.RLock()
defer tpl.mutex.RUnlock()
return tpl.helpers[name]
}
// RegisterHelper registers a helper for that template.
func (tpl *Template) RegisterHelper(name string, helper interface{}) {
tpl.mutex.Lock()
defer tpl.mutex.Unlock()
if tpl.helpers[name] != zero {
panic(fmt.Sprintf("Helper %s already registered", name))
}
val := reflect.ValueOf(helper)
ensureValidHelper(name, val)
tpl.helpers[name] = val
}
// RegisterHelpers registers several helpers for that template.
func (tpl *Template) RegisterHelpers(helpers map[string]interface{}) {
for name, helper := range helpers {
tpl.RegisterHelper(name, helper)
}
}
func (tpl *Template) addPartial(name string, source string, template *Template) {
tpl.mutex.Lock()
defer tpl.mutex.Unlock()
if tpl.partials[name] != nil {
panic(fmt.Sprintf("Partial %s already registered", name))
}
tpl.partials[name] = newPartial(name, source, template)
}
func (tpl *Template) findPartial(name string) *partial {
tpl.mutex.RLock()
defer tpl.mutex.RUnlock()
return tpl.partials[name]
}
// RegisterPartial registers a partial for that template.
func (tpl *Template) RegisterPartial(name string, source string) {
tpl.addPartial(name, source, nil)
}
// RegisterPartials registers several partials for that template.
func (tpl *Template) RegisterPartials(partials map[string]string) {
for name, partial := range partials {
tpl.RegisterPartial(name, partial)
}
}
// RegisterPartialFile reads given file and registers its content as a partial with given name.
func (tpl *Template) RegisterPartialFile(filePath string, name string) error {
b, err := ioutil.ReadFile(filePath)
if err != nil {
return err
}
tpl.RegisterPartial(name, string(b))
return nil
}
// RegisterPartialFiles reads several files and registers them as partials; each filename base is used as the partial name.
func (tpl *Template) RegisterPartialFiles(filePaths ...string) error {
if len(filePaths) == 0 {
return nil
}
for _, filePath := range filePaths {
name := fileBase(filePath)
if err := tpl.RegisterPartialFile(filePath, name); err != nil {
return err
}
}
return nil
}
// RegisterPartialTemplate registers an already parsed partial for that template.
func (tpl *Template) RegisterPartialTemplate(name string, template *Template) {
tpl.addPartial(name, "", template)
}
// Exec evaluates template with given context.
func (tpl *Template) Exec(ctx interface{}) (result string, err error) {
return tpl.ExecWith(ctx, nil)
}
// MustExec evaluates template with given context. It panics on error.
func (tpl *Template) MustExec(ctx interface{}) string {
result, err := tpl.Exec(ctx)
if err != nil {
panic(err)
}
return result
}
// ExecWith evaluates template with given context and private data frame.
func (tpl *Template) ExecWith(ctx interface{}, privData *DataFrame) (result string, err error) {
defer errRecover(&err)
// parses template if necessary
err = tpl.parse()
if err != nil {
return
}
// setup visitor
v := newEvalVisitor(tpl, ctx, privData)
// visit AST
result, _ = tpl.program.Accept(v).(string)
// named return values
return
}
// errRecover recovers evaluation panic
func errRecover(errp *error) {
e := recover()
if e != nil {
switch err := e.(type) {
case runtime.Error:
panic(e)
case error:
*errp = err
default:
panic(e)
}
}
}
// PrintAST returns string representation of parsed template.
func (tpl *Template) PrintAST() string {
if err := tpl.parse(); err != nil {
return fmt.Sprintf("PARSER ERROR: %s", err)
}
return ast.Print(tpl.program)
}
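A sketch tying the `Template` API together: parse once, register template-scoped helpers and partials, then execute with a context. The `func(string) string` helper signature used here is an assumption (it has to pass `ensureValidHelper`, which is defined in a file not shown in this part of the diff):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aymerick/raymond"
)

func main() {
	tpl, err := raymond.Parse("{{shout greeting}} {{> who}}")
	if err != nil {
		panic(err)
	}

	// Helpers and partials registered on the template are scoped to it.
	// Assumed helper signature: one string parameter, string result.
	tpl.RegisterHelper("shout", func(s string) string { return strings.ToUpper(s) })
	tpl.RegisterPartial("who", "{{name}}!")

	out, err := tpl.Exec(map[string]string{"greeting": "hello", "name": "Drone"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // HELLO Drone!
}
```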

85
vendor/github.com/aymerick/raymond/utils.go generated vendored Normal file

@ -0,0 +1,85 @@
package raymond
import (
"path"
"reflect"
)
// indirect returns the item at the end of indirection, and a bool to indicate if it's nil.
// We indirect through pointers and empty interfaces (only) because
// non-empty interfaces have methods we might need.
//
// NOTE: borrowed from https://github.com/golang/go/tree/master/src/text/template/exec.go
func indirect(v reflect.Value) (rv reflect.Value, isNil bool) {
for ; v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface; v = v.Elem() {
if v.IsNil() {
return v, true
}
if v.Kind() == reflect.Interface && v.NumMethod() > 0 {
break
}
}
return v, false
}
// IsTrue returns true if obj is a truthy value.
func IsTrue(obj interface{}) bool {
truth, ok := isTrueValue(reflect.ValueOf(obj))
if !ok {
return false
}
return truth
}
// isTrueValue reports whether the value is 'true', in the sense of not the zero of its type,
// and whether the value has a meaningful truth value
//
// NOTE: borrowed from https://github.com/golang/go/tree/master/src/text/template/exec.go
func isTrueValue(val reflect.Value) (truth, ok bool) {
if !val.IsValid() {
// Something like var x interface{}, never set. It's a form of nil.
return false, true
}
switch val.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
truth = val.Len() > 0
case reflect.Bool:
truth = val.Bool()
case reflect.Complex64, reflect.Complex128:
truth = val.Complex() != 0
case reflect.Chan, reflect.Func, reflect.Ptr, reflect.Interface:
truth = !val.IsNil()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
truth = val.Int() != 0
case reflect.Float32, reflect.Float64:
truth = val.Float() != 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
truth = val.Uint() != 0
case reflect.Struct:
truth = true // Struct values are always true.
default:
return
}
return truth, true
}
// canBeNil reports whether an untyped nil can be assigned to the type. See reflect.Zero.
//
// NOTE: borrowed from https://github.com/golang/go/tree/master/src/text/template/exec.go
func canBeNil(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return true
}
return false
}
// fileBase returns base file name
//
// example: /foo/bar/baz.png => baz
func fileBase(filePath string) string {
fileName := path.Base(filePath)
fileExt := path.Ext(filePath)
return fileName[:len(fileName)-len(fileExt)]
}
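`IsTrue` exposes the truthiness rules implemented by `isTrueValue` above; a quick sketch of how different Go values are classified:

```go
package main

import (
	"fmt"

	"github.com/aymerick/raymond"
)

func main() {
	fmt.Println(raymond.IsTrue(""))         // false: empty string
	fmt.Println(raymond.IsTrue("x"))        // true: non-empty string
	fmt.Println(raymond.IsTrue(0))          // false: zero integer
	fmt.Println(raymond.IsTrue([]int{}))    // false: empty slice
	fmt.Println(raymond.IsTrue(struct{}{})) // true: structs are always true
	fmt.Println(raymond.IsTrue(nil))        // false: untyped nil is an invalid value
}
```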

15
vendor/github.com/davecgh/go-spew/LICENSE generated vendored Normal file

@ -0,0 +1,15 @@
ISC License
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

152
vendor/github.com/davecgh/go-spew/spew/bypass.go generated vendored Normal file

@ -0,0 +1,152 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build !js,!appengine,!safe,!disableunsafe
package spew
import (
"reflect"
"unsafe"
)
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = false
// ptrSize is the size of a pointer on the current arch.
ptrSize = unsafe.Sizeof((*byte)(nil))
)
var (
// offsetPtr, offsetScalar, and offsetFlag are the offsets for the
// internal reflect.Value fields. These values are valid before golang
// commit ecccf07e7f9d which changed the format. They are also valid
// after commit 82f48826c6c7 which changed the format again to mirror
// the original format. Code in the init function updates these offsets
// as necessary.
offsetPtr = uintptr(ptrSize)
offsetScalar = uintptr(0)
offsetFlag = uintptr(ptrSize * 2)
// flagKindWidth and flagKindShift indicate various bits that the
// reflect package uses internally to track kind information.
//
// flagRO indicates whether or not the value field of a reflect.Value is
// read-only.
//
// flagIndir indicates whether the value field of a reflect.Value is
// the actual data or a pointer to the data.
//
// These values are valid before golang commit 90a7c3c86944 which
// changed their positions. Code in the init function updates these
// flags as necessary.
flagKindWidth = uintptr(5)
flagKindShift = uintptr(flagKindWidth - 1)
flagRO = uintptr(1 << 0)
flagIndir = uintptr(1 << 1)
)
func init() {
// Older versions of reflect.Value stored small integers directly in the
// ptr field (which is named val in the older versions). Versions
// between commits ecccf07e7f9d and 82f48826c6c7 added a new field named
// scalar for this purpose which unfortunately came before the flag
// field, so the offset of the flag field is different for those
// versions.
//
// This code constructs a new reflect.Value from a known small integer
// and checks if the size of the reflect.Value struct indicates it has
// the scalar field. When it does, the offsets are updated accordingly.
vv := reflect.ValueOf(0xf00)
if unsafe.Sizeof(vv) == (ptrSize * 4) {
offsetScalar = ptrSize * 2
offsetFlag = ptrSize * 3
}
// Commit 90a7c3c86944 changed the flag positions such that the low
// order bits are the kind. This code extracts the kind from the flags
// field and ensures it's the correct type. When it's not, the flag
// order has been changed to the newer format, so the flags are updated
// accordingly.
upf := unsafe.Pointer(uintptr(unsafe.Pointer(&vv)) + offsetFlag)
upfv := *(*uintptr)(upf)
flagKindMask := uintptr((1<<flagKindWidth - 1) << flagKindShift)
if (upfv&flagKindMask)>>flagKindShift != uintptr(reflect.Int) {
flagKindShift = 0
flagRO = 1 << 5
flagIndir = 1 << 6
// Commit adf9b30e5594 modified the flags to separate the
// flagRO flag into two bits which specifies whether or not the
// field is embedded. This causes flagIndir to move over a bit
// and means that flagRO is the combination of either of the
// original flagRO bit and the new bit.
//
// This code detects the change by extracting what used to be
// the indirect bit to ensure it's set. When it's not, the flag
// order has been changed to the newer format, so the flags are
// updated accordingly.
if upfv&flagIndir == 0 {
flagRO = 3 << 5
flagIndir = 1 << 7
}
}
}
// unsafeReflectValue converts the passed reflect.Value into one that bypasses
// the typical safety restrictions preventing access to unaddressable and
// unexported data. It works by digging the raw pointer to the underlying
// value out of the protected value and generating a new unprotected (unsafe)
// reflect.Value to it.
//
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) (rv reflect.Value) {
indirects := 1
vt := v.Type()
upv := unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetPtr)
rvf := *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetFlag))
if rvf&flagIndir != 0 {
vt = reflect.PtrTo(v.Type())
indirects++
} else if offsetScalar != 0 {
// The value is in the scalar field when it's not one of the
// reference types.
switch vt.Kind() {
case reflect.Uintptr:
case reflect.Chan:
case reflect.Func:
case reflect.Map:
case reflect.Ptr:
case reflect.UnsafePointer:
default:
upv = unsafe.Pointer(uintptr(unsafe.Pointer(&v)) +
offsetScalar)
}
}
pv := reflect.NewAt(vt, upv)
rv = pv
for i := 0; i < indirects; i++ {
rv = rv.Elem()
}
return rv
}

38
vendor/github.com/davecgh/go-spew/spew/bypasssafe.go generated vendored Normal file

@ -0,0 +1,38 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is running on Google App Engine, compiled by GopherJS, or
// "-tags safe" is added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe
package spew
import "reflect"
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = true
)
// unsafeReflectValue typically converts the passed reflect.Value into one
// that bypasses the typical safety restrictions preventing access to
// unaddressable and unexported data. However, doing this relies on access to
// the unsafe package. This is a stub version which simply returns the passed
// reflect.Value when the unsafe package is not available.
func unsafeReflectValue(v reflect.Value) reflect.Value {
return v
}

341
vendor/github.com/davecgh/go-spew/spew/common.go generated vendored Normal file

@ -0,0 +1,341 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"reflect"
"sort"
"strconv"
)
// Some constants in the form of bytes to avoid string overhead. This mirrors
// the technique used in the fmt package.
var (
panicBytes = []byte("(PANIC=")
plusBytes = []byte("+")
iBytes = []byte("i")
trueBytes = []byte("true")
falseBytes = []byte("false")
interfaceBytes = []byte("(interface {})")
commaNewlineBytes = []byte(",\n")
newlineBytes = []byte("\n")
openBraceBytes = []byte("{")
openBraceNewlineBytes = []byte("{\n")
closeBraceBytes = []byte("}")
asteriskBytes = []byte("*")
colonBytes = []byte(":")
colonSpaceBytes = []byte(": ")
openParenBytes = []byte("(")
closeParenBytes = []byte(")")
spaceBytes = []byte(" ")
pointerChainBytes = []byte("->")
nilAngleBytes = []byte("<nil>")
maxNewlineBytes = []byte("<max depth reached>\n")
maxShortBytes = []byte("<max>")
circularBytes = []byte("<already shown>")
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("map[")
closeMapBytes = []byte("]")
lenEqualsBytes = []byte("len=")
capEqualsBytes = []byte("cap=")
)
// hexDigits is used to map a decimal value to a hex digit.
var hexDigits = "0123456789abcdef"
// catchPanic handles any panics that might occur during the handleMethods
// calls.
func catchPanic(w io.Writer, v reflect.Value) {
if err := recover(); err != nil {
w.Write(panicBytes)
fmt.Fprintf(w, "%v", err)
w.Write(closeParenBytes)
}
}
// handleMethods attempts to call the Error and String methods on the underlying
// type the passed reflect.Value represents and outputs the result to Writer w.
//
// It handles panics in any called methods by catching and displaying the error
// as the formatted value.
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
// We need an interface to check if the type implements the error or
// Stringer interface. However, the reflect package won't give us an
// interface on certain things like unexported struct fields in order
// to enforce visibility rules. We use unsafe, when it's available,
// to bypass these restrictions since this package does not mutate the
// values.
if !v.CanInterface() {
if UnsafeDisabled {
return false
}
v = unsafeReflectValue(v)
}
// Choose whether or not to do error and Stringer interface lookups against
// the base type or a pointer to the base type depending on settings.
// Technically calling one of these methods with a pointer receiver can
// mutate the value; however, types which choose to satisfy an error or
// Stringer interface with a pointer receiver should not be mutating their
// state inside these interface methods.
if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
v = unsafeReflectValue(v)
}
if v.CanAddr() {
v = v.Addr()
}
// Is it an error or Stringer?
switch iface := v.Interface().(type) {
case error:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.Error()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.Error()))
return true
case fmt.Stringer:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.String()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.String()))
return true
}
return false
}
// printBool outputs a boolean value as true or false to Writer w.
func printBool(w io.Writer, val bool) {
if val {
w.Write(trueBytes)
} else {
w.Write(falseBytes)
}
}
// printInt outputs a signed integer value to Writer w.
func printInt(w io.Writer, val int64, base int) {
w.Write([]byte(strconv.FormatInt(val, base)))
}
// printUint outputs an unsigned integer value to Writer w.
func printUint(w io.Writer, val uint64, base int) {
w.Write([]byte(strconv.FormatUint(val, base)))
}
// printFloat outputs a floating point value using the specified precision,
// which is expected to be 32 or 64 bit, to Writer w.
func printFloat(w io.Writer, val float64, precision int) {
w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
}
// printComplex outputs a complex value using the specified float precision
// for the real and imaginary parts to Writer w.
func printComplex(w io.Writer, c complex128, floatPrecision int) {
r := real(c)
w.Write(openParenBytes)
w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
i := imag(c)
if i >= 0 {
w.Write(plusBytes)
}
w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
w.Write(iBytes)
w.Write(closeParenBytes)
}
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
// Null pointer.
num := uint64(p)
if num == 0 {
w.Write(nilAngleBytes)
return
}
// Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
buf := make([]byte, 18)
// It's simpler to construct the hex string right to left.
base := uint64(16)
i := len(buf) - 1
for num >= base {
buf[i] = hexDigits[num%base]
num /= base
i--
}
buf[i] = hexDigits[num]
// Add '0x' prefix.
i--
buf[i] = 'x'
i--
buf[i] = '0'
// Strip unused leading bytes.
buf = buf[i:]
w.Write(buf)
}
// valuesSorter implements sort.Interface to allow a slice of reflect.Value
// elements to be sorted.
type valuesSorter struct {
values []reflect.Value
strings []string // either nil or same len as values
cs *ConfigState
}
// newValuesSorter initializes a valuesSorter instance, which holds a set of
// surrogate keys on which the data should be sorted. It uses flags in
// ConfigState to decide if and how to populate those surrogate keys.
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
vs := &valuesSorter{values: values, cs: cs}
if canSortSimply(vs.values[0].Kind()) {
return vs
}
if !cs.DisableMethods {
vs.strings = make([]string, len(values))
for i := range vs.values {
b := bytes.Buffer{}
if !handleMethods(cs, &b, vs.values[i]) {
vs.strings = nil
break
}
vs.strings[i] = b.String()
}
}
if vs.strings == nil && cs.SpewKeys {
vs.strings = make([]string, len(values))
for i := range vs.values {
vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
}
}
return vs
}
// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
// directly, or whether it should be considered for sorting by surrogate keys
// (if the ConfigState allows it).
func canSortSimply(kind reflect.Kind) bool {
// This switch parallels valueSortLess, except for the default case.
switch kind {
case reflect.Bool:
return true
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return true
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return true
case reflect.Float32, reflect.Float64:
return true
case reflect.String:
return true
case reflect.Uintptr:
return true
case reflect.Array:
return true
}
return false
}
// Len returns the number of values in the slice. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Len() int {
return len(s.values)
}
// Swap swaps the values at the passed indices. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Swap(i, j int) {
s.values[i], s.values[j] = s.values[j], s.values[i]
if s.strings != nil {
s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
}
}
// valueSortLess returns whether the first value should sort before the second
// value. It is used by valuesSorter.Less as part of the sort.Interface
// implementation.
func valueSortLess(a, b reflect.Value) bool {
switch a.Kind() {
case reflect.Bool:
return !a.Bool() && b.Bool()
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return a.Int() < b.Int()
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return a.Uint() < b.Uint()
case reflect.Float32, reflect.Float64:
return a.Float() < b.Float()
case reflect.String:
return a.String() < b.String()
case reflect.Uintptr:
return a.Uint() < b.Uint()
case reflect.Array:
// Compare the contents of both arrays.
l := a.Len()
for i := 0; i < l; i++ {
av := a.Index(i)
bv := b.Index(i)
if av.Interface() == bv.Interface() {
continue
}
return valueSortLess(av, bv)
}
}
return a.String() < b.String()
}
// Less returns whether the value at index i should sort before the
// value at index j. It is part of the sort.Interface implementation.
func (s *valuesSorter) Less(i, j int) bool {
if s.strings == nil {
return valueSortLess(s.values[i], s.values[j])
}
return s.strings[i] < s.strings[j]
}
// sortValues is a sort function that handles both native types and any type that
// can be converted to error or Stringer. Other inputs are sorted according to
// their Value.String() value to ensure display stability.
func sortValues(values []reflect.Value, cs *ConfigState) {
if len(values) == 0 {
return
}
sort.Sort(newValuesSorter(values, cs))
}
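
Illustrative sketch (not part of the vendored file): the sorting machinery above is what backs the SortKeys option, so map dumps become deterministic. ConfigState and its Dump method are defined later in this vendored package; the values here are made up.

package main

import "github.com/davecgh/go-spew/spew"

func main() {
	cfg := spew.ConfigState{Indent: " ", SortKeys: true}
	m := map[string]int{"b": 2, "c": 3, "a": 1}
	// With SortKeys set, the keys pass through sortValues above, so the
	// dump order is a, b, c on every run.
	cfg.Dump(m)
}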

306
vendor/github.com/davecgh/go-spew/spew/config.go generated vendored Normal file

@@ -0,0 +1,306 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"os"
)
// ConfigState houses the configuration options used by spew to format and
// display values. There is a global instance, Config, that is used to control
// all top-level Formatter and Dump functionality. Each ConfigState instance
// provides methods equivalent to the top-level functions.
//
// The zero value for ConfigState provides no indentation. You would typically
// want to set it to a space or a tab.
//
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
// with default settings. See the documentation of NewDefaultConfig for default
// values.
type ConfigState struct {
// Indent specifies the string to use for each indentation level. The
// global config instance that all top-level functions use sets this to a
// single space by default. If you would like more indentation, you might
// set this to a tab with "\t" or perhaps two spaces with " ".
Indent string
// MaxDepth controls the maximum number of levels to descend into nested
// data structures. The default, 0, means there is no limit.
//
// NOTE: Circular data structures are properly detected, so it is not
// necessary to set this value unless you specifically want to limit deeply
// nested data structures.
MaxDepth int
// DisableMethods specifies whether or not error and Stringer interfaces are
// invoked for types that implement them.
DisableMethods bool
// DisablePointerMethods specifies whether or not to check for and invoke
// error and Stringer interfaces on types which only accept a pointer
// receiver when the current type is not a pointer.
//
// NOTE: This might be an unsafe action since calling one of these methods
// with a pointer receiver could technically mutate the value, however,
// in practice, types which choose to satisfy an error or Stringer
// interface with a pointer receiver should not be mutating their state
// inside these interface methods. As a result, this option relies on
// access to the unsafe package, so it will not have any effect when
// running in environments without access to the unsafe package such as
// Google App Engine or with the "safe" build tag specified.
DisablePointerMethods bool
// DisablePointerAddresses specifies whether to disable the printing of
// pointer addresses. This is useful when diffing data structures in tests.
DisablePointerAddresses bool
// DisableCapacities specifies whether to disable the printing of capacities
// for arrays, slices, maps and channels. This is useful when diffing
// data structures in tests.
DisableCapacities bool
// ContinueOnMethod specifies whether or not recursion should continue once
// a custom error or Stringer interface is invoked. The default, false,
// means it will print the results of invoking the custom error or Stringer
// interface and return immediately instead of continuing to recurse into
// the internals of the data type.
//
// NOTE: This flag does not have any effect if method invocation is disabled
// via the DisableMethods or DisablePointerMethods options.
ContinueOnMethod bool
// SortKeys specifies map keys should be sorted before being printed. Use
// this to have a more deterministic, diffable output. Note that only
// native types (bool, int, uint, floats, uintptr and string) and types
// that support the error or Stringer interfaces (if methods are
// enabled) are supported, with other types sorted according to the
// reflect.Value.String() output which guarantees display stability.
SortKeys bool
// SpewKeys specifies that, as a last resort attempt, map keys should
// be spewed to strings and sorted by those strings. This is only
// considered if SortKeys is true.
SpewKeys bool
}
// Config is the active configuration of the top-level functions.
// The configuration can be changed by modifying the contents of spew.Config.
var Config = ConfigState{Indent: " "}
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the formatted string as a value that satisfies error. See NewFormatter
// for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, c.convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, c.convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, c.convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, c.convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
return fmt.Print(c.convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, c.convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
return fmt.Println(c.convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprint(a ...interface{}) string {
return fmt.Sprint(c.convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, c.convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintln(a ...interface{}) string {
return fmt.Sprintln(c.convertArgs(a)...)
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
c.Printf, c.Println, or c.Fprintf.
*/
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(c, v)
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
fdump(c, w, a...)
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by modifying the public members
of c. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func (c *ConfigState) Dump(a ...interface{}) {
fdump(c, os.Stdout, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func (c *ConfigState) Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(c, &buf, a...)
return buf.String()
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a spew Formatter interface using
// the ConfigState associated with s.
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = newFormatter(c, arg)
}
return formatters
}
// NewDefaultConfig returns a ConfigState with the following default settings.
//
// Indent: " "
// MaxDepth: 0
// DisableMethods: false
// DisablePointerMethods: false
// ContinueOnMethod: false
// SortKeys: false
func NewDefaultConfig() *ConfigState {
return &ConfigState{Indent: " "}
}
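
Illustrative sketch (not part of the vendored file): a per-instance ConfigState, as described above, can be tuned without touching the global spew.Config. The field values and types below are arbitrary examples.

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	// Start from the defaults and tune this instance only.
	cs := spew.NewDefaultConfig()
	cs.Indent = "\t"
	cs.MaxDepth = 2

	type inner struct{ N int }
	type outer struct{ In *inner }

	fmt.Print(cs.Sdump(outer{In: &inner{N: 7}}))
}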

211
vendor/github.com/davecgh/go-spew/spew/doc.go generated vendored Normal file

@@ -0,0 +1,211 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.
A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output (only when using
Dump style)
There are two different approaches spew allows for dumping Go data structures:
* Dump style which prints with newlines, customizable indentation,
and additional debug information such as types and all pointer addresses
used to indirect to the final value
* A custom Formatter interface that integrates cleanly with the standard fmt
package and replaces %v, %+v, %#v, and %#+v to provide inline printing
similar to the default %v while providing the additional functionality
outlined above and passing unsupported format verbs such as %x and %q
along to fmt
Quick Start
This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.
To dump a variable with full newlines, indentation, type, and pointer
information use Dump, Fdump, or Sdump:
spew.Dump(myVar1, myVar2, ...)
spew.Fdump(someWriter, myVar1, myVar2, ...)
str := spew.Sdump(myVar1, myVar2, ...)
Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
Configuration Options
Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.
It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows concurrent configuration
options. See the ConfigState documentation for more details.
The following configuration options are available:
* Indent
String to use for each indentation level for Dump functions.
It is a single space by default. A popular alternative is "\t".
* MaxDepth
Maximum number of levels to descend into nested data structures.
There is no limit by default.
* DisableMethods
Disables invocation of error and Stringer interface methods.
Method invocation is enabled by default.
* DisablePointerMethods
Disables invocation of error and Stringer interface methods on types
which only accept pointer receivers from non-pointer variables.
Pointer method invocation is enabled by default.
* DisablePointerAddresses
DisablePointerAddresses specifies whether to disable the printing of
pointer addresses. This is useful when diffing data structures in tests.
* DisableCapacities
DisableCapacities specifies whether to disable the printing of
capacities for arrays, slices, maps and channels. This is useful when
diffing data structures in tests.
* ContinueOnMethod
Enables recursion into types after invoking error and Stringer interface
methods. Recursion after method invocation is disabled by default.
* SortKeys
Specifies map keys should be sorted before being printed. Use
this to have a more deterministic, diffable output. Note that
only native types (bool, int, uint, floats, uintptr and string)
and types which implement error or Stringer interfaces are
supported with other types sorted according to the
reflect.Value.String() output which guarantees display
stability. Natural map order is used by default.
* SpewKeys
Specifies that, as a last resort attempt, map keys should be
spewed to strings and sorted by those strings. This is only
considered if SortKeys is true.
Dump Usage
Simply call spew.Dump with a list of variables you want to dump:
spew.Dump(myVar1, myVar2, ...)
You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:
spew.Fdump(os.Stderr, myVar1, myVar2, ...)
A third option is to call spew.Sdump to get the formatted output as a string:
str := spew.Sdump(myVar1, myVar2, ...)
Sample Dump Output
See the Dump example for details on the setup of the types and variables being
shown here.
(main.Foo) {
unexportedField: (*main.Bar)(0xf84002e210)({
flag: (main.Flag) flagTwo,
data: (uintptr) <nil>
}),
ExportedField: (map[interface {}]interface {}) (len=1) {
(string) (len=3) "one": (bool) true
}
}
Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
command as shown.
([]uint8) (len=32 cap=32) {
00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
00000020 31 32 |12|
}
Custom Formatter
Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Custom Formatter Usage
The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Println(myVar, myVar2)
spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
See the Index for the full list of convenience functions.
Sample Formatter Output
Double pointer to a uint8:
%v: <**>5
%+v: <**>(0xf8400420d0->0xf8400420c8)5
%#v: (**uint8)5
%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
Pointer to circular struct with a uint8 field and a pointer to itself:
%v: <*>{1 <*><shown>}
%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
See the Printf example for details on the setup of variables being shown
here.
Errors
Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew

509
vendor/github.com/davecgh/go-spew/spew/dump.go generated vendored Normal file

@@ -0,0 +1,509 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"os"
"reflect"
"regexp"
"strconv"
"strings"
)
var (
// uint8Type is a reflect.Type representing a uint8. It is used to
// convert cgo types to uint8 slices for hexdumping.
uint8Type = reflect.TypeOf(uint8(0))
// cCharRE is a regular expression that matches a cgo char.
// It is used to detect character arrays to hexdump them.
cCharRE = regexp.MustCompile("^.*\\._Ctype_char$")
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
// char. It is used to detect unsigned character arrays to hexdump
// them.
cUnsignedCharRE = regexp.MustCompile("^.*\\._Ctype_unsignedchar$")
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
// It is used to detect uint8_t arrays to hexdump them.
cUint8tCharRE = regexp.MustCompile("^.*\\._Ctype_uint8_t$")
)
// dumpState contains information about the state of a dump operation.
type dumpState struct {
w io.Writer
depth int
pointers map[uintptr]int
ignoreNextType bool
ignoreNextIndent bool
cs *ConfigState
}
// indent performs indentation according to the depth level and cs.Indent
// option.
func (d *dumpState) indent() {
if d.ignoreNextIndent {
d.ignoreNextIndent = false
return
}
d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
}
// unpackValue returns values inside of non-nil interfaces when possible.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface && !v.IsNil() {
v = v.Elem()
}
return v
}
// dumpPtr handles formatting of pointers by indirecting them as necessary.
func (d *dumpState) dumpPtr(v reflect.Value) {
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range d.pointers {
if depth >= d.depth {
delete(d.pointers, k)
}
}
// Keep list of all dereferenced pointers to show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := d.pointers[addr]; ok && pd < d.depth {
cycleFound = true
indirects--
break
}
d.pointers[addr] = d.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type information.
d.w.Write(openParenBytes)
d.w.Write(bytes.Repeat(asteriskBytes, indirects))
d.w.Write([]byte(ve.Type().String()))
d.w.Write(closeParenBytes)
// Display pointer information.
if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
d.w.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
d.w.Write(pointerChainBytes)
}
printHexPtr(d.w, addr)
}
d.w.Write(closeParenBytes)
}
// Display dereferenced value.
d.w.Write(openParenBytes)
switch {
case nilFound == true:
d.w.Write(nilAngleBytes)
case cycleFound == true:
d.w.Write(circularBytes)
default:
d.ignoreNextType = true
d.dump(ve)
}
d.w.Write(closeParenBytes)
}
// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
// reflection) arrays and slices are dumped in hexdump -C fashion.
func (d *dumpState) dumpSlice(v reflect.Value) {
// Determine whether this type should be hex dumped or not. Also,
// for types which should be hexdumped, try to use the underlying data
// first, then fall back to trying to convert them to a uint8 slice.
var buf []uint8
doConvert := false
doHexDump := false
numEntries := v.Len()
if numEntries > 0 {
vt := v.Index(0).Type()
vts := vt.String()
switch {
// C types that need to be converted.
case cCharRE.MatchString(vts):
fallthrough
case cUnsignedCharRE.MatchString(vts):
fallthrough
case cUint8tCharRE.MatchString(vts):
doConvert = true
// Try to use existing uint8 slices and fall back to converting
// and copying if that fails.
case vt.Kind() == reflect.Uint8:
// We need an addressable interface to convert the type
// to a byte slice. However, the reflect package won't
// give us an interface on certain things like
// unexported struct fields in order to enforce
// visibility rules. We use unsafe, when available, to
// bypass these restrictions since this package does not
// mutate the values.
vs := v
if !vs.CanInterface() || !vs.CanAddr() {
vs = unsafeReflectValue(vs)
}
if !UnsafeDisabled {
vs = vs.Slice(0, numEntries)
// Use the existing uint8 slice if it can be
// type asserted.
iface := vs.Interface()
if slice, ok := iface.([]uint8); ok {
buf = slice
doHexDump = true
break
}
}
// The underlying data needs to be converted if it can't
// be type asserted to a uint8 slice.
doConvert = true
}
// Copy and convert the underlying type if needed.
if doConvert && vt.ConvertibleTo(uint8Type) {
// Convert and copy each element into a uint8 byte
// slice.
buf = make([]uint8, numEntries)
for i := 0; i < numEntries; i++ {
vv := v.Index(i)
buf[i] = uint8(vv.Convert(uint8Type).Uint())
}
doHexDump = true
}
}
// Hexdump the entire slice as needed.
if doHexDump {
indent := strings.Repeat(d.cs.Indent, d.depth)
str := indent + hex.Dump(buf)
str = strings.Replace(str, "\n", "\n"+indent, -1)
str = strings.TrimRight(str, d.cs.Indent)
d.w.Write([]byte(str))
return
}
// Recursively call dump for each item.
for i := 0; i < numEntries; i++ {
d.dump(d.unpackValue(v.Index(i)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
// dump is the main workhorse for dumping a value. It uses the passed reflect
// value to figure out what kind of object we are dealing with and formats it
// appropriately. It is a recursive function, however circular data structures
// are detected and handled properly.
func (d *dumpState) dump(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
d.w.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
d.indent()
d.dumpPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !d.ignoreNextType {
d.indent()
d.w.Write(openParenBytes)
d.w.Write([]byte(v.Type().String()))
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
d.ignoreNextType = false
// Display length and capacity if the built-in len and cap functions
// work with the value's kind and the len/cap itself is non-zero.
valueLen, valueCap := 0, 0
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.Chan:
valueLen, valueCap = v.Len(), v.Cap()
case reflect.Map, reflect.String:
valueLen = v.Len()
}
if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
d.w.Write(openParenBytes)
if valueLen != 0 {
d.w.Write(lenEqualsBytes)
printInt(d.w, int64(valueLen), 10)
}
if !d.cs.DisableCapacities && valueCap != 0 {
if valueLen != 0 {
d.w.Write(spaceBytes)
}
d.w.Write(capEqualsBytes)
printInt(d.w, int64(valueCap), 10)
}
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
// Call Stringer/error interfaces if they exist and the handle methods flag
// is enabled
if !d.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(d.cs, d.w, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(d.w, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(d.w, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(d.w, v.Uint(), 10)
case reflect.Float32:
printFloat(d.w, v.Float(), 32)
case reflect.Float64:
printFloat(d.w, v.Float(), 64)
case reflect.Complex64:
printComplex(d.w, v.Complex(), 32)
case reflect.Complex128:
printComplex(d.w, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
d.dumpSlice(v)
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.String:
d.w.Write([]byte(strconv.Quote(v.String())))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
d.w.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
numEntries := v.Len()
keys := v.MapKeys()
if d.cs.SortKeys {
sortValues(keys, d.cs)
}
for i, key := range keys {
d.dump(d.unpackValue(key))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.MapIndex(key)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Struct:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
vt := v.Type()
numFields := v.NumField()
for i := 0; i < numFields; i++ {
d.indent()
vtf := vt.Field(i)
d.w.Write([]byte(vtf.Name))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.Field(i)))
if i < (numFields - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(d.w, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(d.w, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it in case any new
// types are added.
default:
if v.CanInterface() {
fmt.Fprintf(d.w, "%v", v.Interface())
} else {
fmt.Fprintf(d.w, "%v", v.String())
}
}
}
// fdump is a helper function to consolidate the logic from the various public
// methods which take varying writers and config states.
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
for _, arg := range a {
if arg == nil {
w.Write(interfaceBytes)
w.Write(spaceBytes)
w.Write(nilAngleBytes)
w.Write(newlineBytes)
continue
}
d := dumpState{w: w, cs: cs}
d.pointers = make(map[uintptr]int)
d.dump(reflect.ValueOf(arg))
d.w.Write(newlineBytes)
}
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func Fdump(w io.Writer, a ...interface{}) {
fdump(&Config, w, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(&Config, &buf, a...)
return buf.String()
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by an exported package global,
spew.Config. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func Dump(a ...interface{}) {
fdump(&Config, os.Stdout, a...)
}
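
Illustrative sketch (not part of the vendored file): Dump, Fdump, and Sdump defined above differ only in where the output goes. The map value below is arbitrary; its []byte element also exercises the hexdump path described in dumpSlice.

package main

import (
	"bytes"
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	v := map[string][]byte{"payload": {0x01, 0x02, 0x03}}

	// Dump writes to stdout, Fdump to any io.Writer, and Sdump returns the
	// same output as a string.
	spew.Dump(v)

	var buf bytes.Buffer
	spew.Fdump(&buf, v)
	fmt.Print(buf.String())

	str := spew.Sdump(v)
	_ = str
}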

419
vendor/github.com/davecgh/go-spew/spew/format.go generated vendored Normal file

@@ -0,0 +1,419 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"reflect"
"strconv"
"strings"
)
// supportedFlags is a list of all the character flags supported by fmt package.
const supportedFlags = "0-+# "
// formatState implements the fmt.Formatter interface and contains information
// about the state of a formatting operation. The NewFormatter function can
// be used to get a new Formatter which can be used directly as arguments
// in standard fmt package printing calls.
type formatState struct {
value interface{}
fs fmt.State
depth int
pointers map[uintptr]int
ignoreNextType bool
cs *ConfigState
}
// buildDefaultFormat recreates the original format string without precision
// and width information to pass in to fmt.Sprintf in the case of an
// unrecognized type. Unless new types are added to the language, this
// function won't ever be called.
func (f *formatState) buildDefaultFormat() (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
buf.WriteRune('v')
format = buf.String()
return format
}
// constructOrigFormat recreates the original format string including precision
// and width information to pass along to the standard fmt package. This allows
// automatic deferral of all format strings this package doesn't support.
func (f *formatState) constructOrigFormat(verb rune) (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
if width, ok := f.fs.Width(); ok {
buf.WriteString(strconv.Itoa(width))
}
if precision, ok := f.fs.Precision(); ok {
buf.Write(precisionBytes)
buf.WriteString(strconv.Itoa(precision))
}
buf.WriteRune(verb)
format = buf.String()
return format
}
// unpackValue returns values inside of non-nil interfaces when possible and
// ensures that types for values which have been unpacked from an interface
// are displayed when the show types flag is also set.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface {
f.ignoreNextType = false
if !v.IsNil() {
v = v.Elem()
}
}
return v
}
// formatPtr handles formatting of pointers by indirecting them as necessary.
func (f *formatState) formatPtr(v reflect.Value) {
// Display nil if top level pointer is nil.
showTypes := f.fs.Flag('#')
if v.IsNil() && (!showTypes || f.ignoreNextType) {
f.fs.Write(nilAngleBytes)
return
}
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range f.pointers {
if depth >= f.depth {
delete(f.pointers, k)
}
}
// Keep list of all dereferenced pointers to possibly show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := f.pointers[addr]; ok && pd < f.depth {
cycleFound = true
indirects--
break
}
f.pointers[addr] = f.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type or indirection level depending on flags.
if showTypes && !f.ignoreNextType {
f.fs.Write(openParenBytes)
f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
f.fs.Write([]byte(ve.Type().String()))
f.fs.Write(closeParenBytes)
} else {
if nilFound || cycleFound {
indirects += strings.Count(ve.Type().String(), "*")
}
f.fs.Write(openAngleBytes)
f.fs.Write([]byte(strings.Repeat("*", indirects)))
f.fs.Write(closeAngleBytes)
}
// Display pointer information depending on flags.
if f.fs.Flag('+') && (len(pointerChain) > 0) {
f.fs.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
f.fs.Write(pointerChainBytes)
}
printHexPtr(f.fs, addr)
}
f.fs.Write(closeParenBytes)
}
// Display dereferenced value.
switch {
case nilFound == true:
f.fs.Write(nilAngleBytes)
case cycleFound == true:
f.fs.Write(circularShortBytes)
default:
f.ignoreNextType = true
f.format(ve)
}
}
// format is the main workhorse for providing the Formatter interface. It
// uses the passed reflect value to figure out what kind of object we are
// dealing with and formats it appropriately. It is a recursive function,
// however circular data structures are detected and handled properly.
func (f *formatState) format(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
f.fs.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
f.formatPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !f.ignoreNextType && f.fs.Flag('#') {
f.fs.Write(openParenBytes)
f.fs.Write([]byte(v.Type().String()))
f.fs.Write(closeParenBytes)
}
f.ignoreNextType = false
// Call Stringer/error interfaces if they exist and the handle methods
// flag is enabled.
if !f.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(f.cs, f.fs, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(f.fs, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(f.fs, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(f.fs, v.Uint(), 10)
case reflect.Float32:
printFloat(f.fs, v.Float(), 32)
case reflect.Float64:
printFloat(f.fs, v.Float(), 64)
case reflect.Complex64:
printComplex(f.fs, v.Complex(), 32)
case reflect.Complex128:
printComplex(f.fs, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
f.fs.Write(openBracketBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
numEntries := v.Len()
for i := 0; i < numEntries; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(v.Index(i)))
}
}
f.depth--
f.fs.Write(closeBracketBytes)
case reflect.String:
f.fs.Write([]byte(v.String()))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
f.fs.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
f.fs.Write(openMapBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
keys := v.MapKeys()
if f.cs.SortKeys {
sortValues(keys, f.cs)
}
for i, key := range keys {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(key))
f.fs.Write(colonBytes)
f.ignoreNextType = true
f.format(f.unpackValue(v.MapIndex(key)))
}
}
f.depth--
f.fs.Write(closeMapBytes)
case reflect.Struct:
numFields := v.NumField()
f.fs.Write(openBraceBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
vt := v.Type()
for i := 0; i < numFields; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
vtf := vt.Field(i)
if f.fs.Flag('+') || f.fs.Flag('#') {
f.fs.Write([]byte(vtf.Name))
f.fs.Write(colonBytes)
}
f.format(f.unpackValue(v.Field(i)))
}
}
f.depth--
f.fs.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(f.fs, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(f.fs, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it if any get added.
default:
format := f.buildDefaultFormat()
if v.CanInterface() {
fmt.Fprintf(f.fs, format, v.Interface())
} else {
fmt.Fprintf(f.fs, format, v.String())
}
}
}
// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
// details.
func (f *formatState) Format(fs fmt.State, verb rune) {
f.fs = fs
// Use standard formatting for verbs that are not v.
if verb != 'v' {
format := f.constructOrigFormat(verb)
fmt.Fprintf(fs, format, f.value)
return
}
if f.value == nil {
if fs.Flag('#') {
fs.Write(interfaceBytes)
}
fs.Write(nilAngleBytes)
return
}
f.format(reflect.ValueOf(f.value))
}
// newFormatter is a helper function to consolidate the logic from the various
// public methods which take varying config states.
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
fs := &formatState{value: v, cs: cs}
fs.pointers = make(map[uintptr]int)
return fs
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
Printf, Println, or Fprintf.
*/
func NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(&Config, v)
}
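
Illustrative sketch (not part of the vendored file): the Formatter returned by NewFormatter above is handed straight to the standard fmt package, as the doc comment describes. The struct and values are arbitrary.

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type pair struct {
	Key string
	Val *int
}

func main() {
	n := 42
	p := pair{Key: "answer", Val: &n}

	// The returned fmt.Formatter plugs into ordinary fmt calls; %+v adds
	// pointer addresses and %#+v adds types as well.
	fmt.Printf("%v\n", spew.NewFormatter(p))
	fmt.Printf("%#+v\n", spew.NewFormatter(p))
}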

148
vendor/github.com/davecgh/go-spew/spew/spew.go generated vendored Normal file

@@ -0,0 +1,148 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"fmt"
"io"
)
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the formatted string as a value that satisfies error. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
func Print(a ...interface{}) (n int, err error) {
return fmt.Print(convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
func Println(a ...interface{}) (n int, err error) {
return fmt.Println(convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprint(a ...interface{}) string {
return fmt.Sprint(convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintln(a ...interface{}) string {
return fmt.Sprintln(convertArgs(a)...)
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a default spew Formatter interface.
func convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = NewFormatter(arg)
}
return formatters
}
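The file above only re-exports the fmt verbs with spew's Formatter applied to each argument. A minimal sketch of how these wrappers are typically called, assuming the usual github.com/davecgh/go-spew/spew import path and an illustrative struct:

```go
package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

// build is an illustrative struct, not part of the plugin.
type build struct {
	Number int
	Tags   []string
}

func main() {
	b := &build{Number: 42, Tags: []string{"latest", "1.0"}}

	// Plain fmt prints the value with its default formatting.
	fmt.Printf("%v\n", b)

	// The wrapper routes the argument through NewFormatter, so the same
	// verb is handled by spew, which dereferences pointers and guards
	// against cyclic data structures.
	spew.Printf("%v\n", b)
}
```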

vendor/github.com/drone/drone-go/LICENSE generated vendored Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/drone/drone-go/template/template.go generated vendored Normal file

@ -0,0 +1,128 @@
package template
import (
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"unicode"
"unicode/utf8"
"github.com/aymerick/raymond"
)
func init() {
raymond.RegisterHelpers(funcs)
}
// Render parses and executes a template, returning the results in string format.
func Render(template string, payload interface{}) (s string, err error) {
u, err := url.Parse(template)
if err == nil {
switch u.Scheme {
case "http", "https":
res, err := http.Get(template)
if err != nil {
return s, err
}
defer res.Body.Close()
out, err := ioutil.ReadAll(res.Body)
if err != nil {
return s, err
}
template = string(out)
case "file":
out, err := ioutil.ReadFile(u.Path)
if err != nil {
return s, err
}
template = string(out)
}
}
return raymond.Render(template, payload)
}
// RenderTrim parses and executes a template, returning the results in string
// format. The result is trimmed to remove left and right padding and newlines
// that may be added unintentionally in the template markup.
func RenderTrim(template string, payload interface{}) (string, error) {
out, err := Render(template, payload)
return strings.Trim(out, " \n"), err
}
var funcs = map[string]interface{}{
"uppercasefirst": uppercaseFirst,
"uppercase": strings.ToUpper,
"lowercase": strings.ToLower,
"duration": toDuration,
"datetime": toDatetime,
"success": isSuccess,
"failure": isFailure,
"truncate": truncate,
"urlencode": urlencode,
}
func truncate(s string, len int) string {
if utf8.RuneCountInString(s) <= len {
return s
}
runes := []rune(s)
return string(runes[:len])
}
func uppercaseFirst(s string) string {
a := []rune(s)
a[0] = unicode.ToUpper(a[0])
s = string(a)
return s
}
func toDuration(started, finished int64) string {
return fmt.Sprintln(time.Duration(finished-started) * time.Second)
}
func toDatetime(timestamp int64, layout, zone string) string {
if len(zone) == 0 {
return time.Unix(int64(timestamp), 0).Format(layout)
}
loc, err := time.LoadLocation(zone)
if err != nil {
return time.Unix(int64(timestamp), 0).Local().Format(layout)
}
return time.Unix(int64(timestamp), 0).In(loc).Format(layout)
}
func isSuccess(conditional bool, options *raymond.Options) string {
if !conditional {
return options.Inverse()
}
switch options.ParamStr(0) {
case "success":
return options.Fn()
default:
return options.Inverse()
}
}
func isFailure(conditional bool, options *raymond.Options) string {
if !conditional {
return options.Inverse()
}
switch options.ParamStr(0) {
case "failure", "error", "killed":
return options.Fn()
default:
return options.Inverse()
}
}
func urlencode(options *raymond.Options) string {
return url.QueryEscape(options.Fn())
}
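A minimal sketch of rendering an inline template with the helpers registered above; the payload here is a hypothetical stand-in for the repo/build data a plugin would pass:

```go
package main

import (
	"fmt"
	"log"

	"github.com/drone/drone-go/template"
)

func main() {
	// Hypothetical payload; any struct or nested map works, since raymond
	// resolves field paths such as build.status by name.
	payload := map[string]interface{}{
		"repo":  map[string]interface{}{"name": "drone-plugins/drone-email"},
		"build": map[string]interface{}{"number": 7, "status": "success"},
	}

	// Inline template string; Render also accepts http(s):// and file://
	// URLs and fetches the template body before rendering.
	tmpl := "Build #{{ build.number }} of {{ repo.name }}: {{ uppercase build.status }}"

	out, err := template.RenderTrim(tmpl, payload)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // prints: Build #7 of drone-plugins/drone-email: SUCCESS
}
```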

vendor/github.com/gorilla/css/LICENSE generated vendored Normal file

@ -0,0 +1,27 @@
Copyright (c) 2013, Gorilla web toolkit
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
Neither the name of the {organization} nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/gorilla/css/scanner/doc.go generated vendored Normal file

@ -0,0 +1,33 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package gorilla/css/scanner generates tokens for a CSS3 input.
It follows the CSS3 specification located at:
http://www.w3.org/TR/css3-syntax/
To use it, create a new scanner for a given CSS string and call Next() until
the token returned has type TokenEOF or TokenError:
s := scanner.New(myCSS)
for {
token := s.Next()
if token.Type == scanner.TokenEOF || token.Type == scanner.TokenError {
break
}
// Do something with the token...
}
Following the CSS3 specification, an error can only occur when the scanner
finds an unclosed quote or unclosed comment. In these cases the text becomes
"untokenizable". Everything else is tokenizable and it is up to a parser
to make sense of the token stream (or ignore nonsensical token sequences).
Note: the scanner doesn't perform lexical analysis or, in other words, it
doesn't care about the token context. It is intended to be used by a
lexer or parser.
*/
package scanner
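A runnable version of the loop sketched in the package comment, with an illustrative CSS fragment:

```go
package main

import (
	"fmt"

	"github.com/gorilla/css/scanner"
)

func main() {
	// Illustrative input; any CSS3 fragment works.
	s := scanner.New("a.logo { color: #fff; /* brand */ }")

	for {
		token := s.Next()
		if token.Type == scanner.TokenEOF || token.Type == scanner.TokenError {
			break
		}
		// Token's String method reports the type, position, and value.
		fmt.Println(token)
	}
}
```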

vendor/github.com/gorilla/css/scanner/scanner.go generated vendored Normal file

@ -0,0 +1,348 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package scanner
import (
"fmt"
"regexp"
"strings"
"unicode"
"unicode/utf8"
)
// tokenType identifies the type of lexical tokens.
type tokenType int
// String returns a string representation of the token type.
func (t tokenType) String() string {
return tokenNames[t]
}
// Token represents a token and the corresponding string.
type Token struct {
Type tokenType
Value string
Line int
Column int
}
// String returns a string representation of the token.
func (t *Token) String() string {
if len(t.Value) > 10 {
return fmt.Sprintf("%s (line: %d, column: %d): %.10q...",
t.Type, t.Line, t.Column, t.Value)
}
return fmt.Sprintf("%s (line: %d, column: %d): %q",
t.Type, t.Line, t.Column, t.Value)
}
// All tokens -----------------------------------------------------------------
// The complete list of tokens in CSS3.
const (
// Scanner flags.
TokenError tokenType = iota
TokenEOF
// From now on, only tokens from the CSS specification.
TokenIdent
TokenAtKeyword
TokenString
TokenHash
TokenNumber
TokenPercentage
TokenDimension
TokenURI
TokenUnicodeRange
TokenCDO
TokenCDC
TokenS
TokenComment
TokenFunction
TokenIncludes
TokenDashMatch
TokenPrefixMatch
TokenSuffixMatch
TokenSubstringMatch
TokenChar
TokenBOM
)
// tokenNames maps tokenType's to their names. Used for conversion to string.
var tokenNames = map[tokenType]string{
TokenError: "error",
TokenEOF: "EOF",
TokenIdent: "IDENT",
TokenAtKeyword: "ATKEYWORD",
TokenString: "STRING",
TokenHash: "HASH",
TokenNumber: "NUMBER",
TokenPercentage: "PERCENTAGE",
TokenDimension: "DIMENSION",
TokenURI: "URI",
TokenUnicodeRange: "UNICODE-RANGE",
TokenCDO: "CDO",
TokenCDC: "CDC",
TokenS: "S",
TokenComment: "COMMENT",
TokenFunction: "FUNCTION",
TokenIncludes: "INCLUDES",
TokenDashMatch: "DASHMATCH",
TokenPrefixMatch: "PREFIXMATCH",
TokenSuffixMatch: "SUFFIXMATCH",
TokenSubstringMatch: "SUBSTRINGMATCH",
TokenChar: "CHAR",
TokenBOM: "BOM",
}
// Macros and productions -----------------------------------------------------
// http://www.w3.org/TR/css3-syntax/#tokenization
var macroRegexp = regexp.MustCompile(`\{[a-z]+\}`)
// macros maps macro names to patterns to be expanded.
var macros = map[string]string{
// must be escaped: `\.+*?()|[]{}^$`
"ident": `-?{nmstart}{nmchar}*`,
"name": `{nmchar}+`,
"nmstart": `[a-zA-Z_]|{nonascii}|{escape}`,
"nonascii": "[\u0080-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]",
"unicode": `\\[0-9a-fA-F]{1,6}{wc}?`,
"escape": "{unicode}|\\\\[\u0020-\u007E\u0080-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]",
"nmchar": `[a-zA-Z0-9_-]|{nonascii}|{escape}`,
"num": `[0-9]*\.[0-9]+|[0-9]+`,
"string": `"(?:{stringchar}|')*"|'(?:{stringchar}|")*'`,
"stringchar": `{urlchar}|[ ]|\\{nl}`,
"urlchar": "[\u0009\u0021\u0023-\u0026\u0027-\u007E]|{nonascii}|{escape}",
"nl": `[\n\r\f]|\r\n`,
"w": `{wc}*`,
"wc": `[\t\n\f\r ]`,
}
// productions maps the list of tokens to patterns to be expanded.
var productions = map[tokenType]string{
// Unused regexps (matched using other methods) are commented out.
TokenIdent: `{ident}`,
TokenAtKeyword: `@{ident}`,
TokenString: `{string}`,
TokenHash: `#{name}`,
TokenNumber: `{num}`,
TokenPercentage: `{num}%`,
TokenDimension: `{num}{ident}`,
TokenURI: `url\({w}(?:{string}|{urlchar}*){w}\)`,
TokenUnicodeRange: `U\+[0-9A-F\?]{1,6}(?:-[0-9A-F]{1,6})?`,
//TokenCDO: `<!--`,
TokenCDC: `-->`,
TokenS: `{wc}+`,
TokenComment: `/\*[^\*]*[\*]+(?:[^/][^\*]*[\*]+)*/`,
TokenFunction: `{ident}\(`,
//TokenIncludes: `~=`,
//TokenDashMatch: `\|=`,
//TokenPrefixMatch: `\^=`,
//TokenSuffixMatch: `\$=`,
//TokenSubstringMatch: `\*=`,
//TokenChar: `[^"']`,
//TokenBOM: "\uFEFF",
}
// matchers maps the list of tokens to compiled regular expressions.
//
// The map is filled on init() using the macros and productions defined in
// the CSS specification.
var matchers = map[tokenType]*regexp.Regexp{}
// matchOrder is the order to test regexps when first-char shortcuts
// can't be used.
var matchOrder = []tokenType{
TokenURI,
TokenFunction,
TokenUnicodeRange,
TokenIdent,
TokenDimension,
TokenPercentage,
TokenNumber,
TokenCDC,
}
func init() {
// replace macros and compile regexps for productions.
replaceMacro := func(s string) string {
return "(?:" + macros[s[1:len(s)-1]] + ")"
}
for t, s := range productions {
for macroRegexp.MatchString(s) {
s = macroRegexp.ReplaceAllStringFunc(s, replaceMacro)
}
matchers[t] = regexp.MustCompile("^(?:" + s + ")")
}
}
// Scanner --------------------------------------------------------------------
// New returns a new CSS scanner for the given input.
func New(input string) *Scanner {
// Normalize newlines.
input = strings.Replace(input, "\r\n", "\n", -1)
return &Scanner{
input: input,
row: 1,
col: 1,
}
}
// Scanner scans an input and emits tokens following the CSS3 specification.
type Scanner struct {
input string
pos int
row int
col int
err *Token
}
// Next returns the next token from the input.
//
// At the end of the input the token type is TokenEOF.
//
// If the input can't be tokenized the token type is TokenError. This occurs
// in case of unclosed quotation marks or comments.
func (s *Scanner) Next() *Token {
if s.err != nil {
return s.err
}
if s.pos >= len(s.input) {
s.err = &Token{TokenEOF, "", s.row, s.col}
return s.err
}
if s.pos == 0 {
// Test BOM only once, at the beginning of the file.
if strings.HasPrefix(s.input, "\uFEFF") {
return s.emitSimple(TokenBOM, "\uFEFF")
}
}
// There's a lot we can guess based on the first byte so we'll take a
// shortcut before testing multiple regexps.
input := s.input[s.pos:]
switch input[0] {
case '\t', '\n', '\f', '\r', ' ':
// Whitespace.
return s.emitToken(TokenS, matchers[TokenS].FindString(input))
case '.':
// Dot is too common to not have a quick check.
// We'll test if this is a Char; if it is followed by a number it is a
// dimension/percentage/number, and this will be matched later.
if len(input) > 1 && !unicode.IsDigit(rune(input[1])) {
return s.emitSimple(TokenChar, ".")
}
case '#':
// Another common one: Hash or Char.
if match := matchers[TokenHash].FindString(input); match != "" {
return s.emitToken(TokenHash, match)
}
return s.emitSimple(TokenChar, "#")
case '@':
// Another common one: AtKeyword or Char.
if match := matchers[TokenAtKeyword].FindString(input); match != "" {
return s.emitSimple(TokenAtKeyword, match)
}
return s.emitSimple(TokenChar, "@")
case ':', ',', ';', '%', '&', '+', '=', '>', '(', ')', '[', ']', '{', '}':
// More common chars.
return s.emitSimple(TokenChar, string(input[0]))
case '"', '\'':
// String or error.
match := matchers[TokenString].FindString(input)
if match != "" {
return s.emitToken(TokenString, match)
} else {
s.err = &Token{TokenError, "unclosed quotation mark", s.row, s.col}
return s.err
}
case '/':
// Comment, error or Char.
if len(input) > 1 && input[1] == '*' {
match := matchers[TokenComment].FindString(input)
if match != "" {
return s.emitToken(TokenComment, match)
} else {
s.err = &Token{TokenError, "unclosed comment", s.row, s.col}
return s.err
}
}
return s.emitSimple(TokenChar, "/")
case '~':
// Includes or Char.
return s.emitPrefixOrChar(TokenIncludes, "~=")
case '|':
// DashMatch or Char.
return s.emitPrefixOrChar(TokenDashMatch, "|=")
case '^':
// PrefixMatch or Char.
return s.emitPrefixOrChar(TokenPrefixMatch, "^=")
case '$':
// SuffixMatch or Char.
return s.emitPrefixOrChar(TokenSuffixMatch, "$=")
case '*':
// SubstringMatch or Char.
return s.emitPrefixOrChar(TokenSubstringMatch, "*=")
case '<':
// CDO or Char.
return s.emitPrefixOrChar(TokenCDO, "<!--")
}
// Test all regexps, in order.
for _, token := range matchOrder {
if match := matchers[token].FindString(input); match != "" {
return s.emitToken(token, match)
}
}
// We already handled unclosed quotation marks and comments,
// so this can only be a Char.
r, width := utf8.DecodeRuneInString(input)
token := &Token{TokenChar, string(r), s.row, s.col}
s.col += width
s.pos += width
return token
}
// updatePosition updates input coordinates based on the consumed text.
func (s *Scanner) updatePosition(text string) {
width := utf8.RuneCountInString(text)
lines := strings.Count(text, "\n")
s.row += lines
if lines == 0 {
s.col += width
} else {
s.col = utf8.RuneCountInString(text[strings.LastIndex(text, "\n"):])
}
s.pos += len(text) // while col is a rune index, pos is a byte index
}
// emitToken returns a Token for the string v and updates the scanner position.
func (s *Scanner) emitToken(t tokenType, v string) *Token {
token := &Token{t, v, s.row, s.col}
s.updatePosition(v)
return token
}
// emitSimple returns a Token for the string v and updates the scanner
// position in a simplified manner.
//
// The string is known to have only ASCII characters and to not have a newline.
func (s *Scanner) emitSimple(t tokenType, v string) *Token {
token := &Token{t, v, s.row, s.col}
s.col += len(v)
s.pos += len(v)
return token
}
// emitPrefixOrChar returns a Token for type t if the current position
// matches the given prefix. Otherwise it returns a Char token using the
// first character from the prefix.
//
// The prefix is known to have only ASCII characters and to not have a newline.
func (s *Scanner) emitPrefixOrChar(t tokenType, prefix string) *Token {
if strings.HasPrefix(s.input[s.pos:], prefix) {
return s.emitSimple(t, prefix)
}
return s.emitSimple(TokenChar, string(prefix[0]))
}

vendor/github.com/jaytaylor/html2text/LICENSE generated vendored Normal file

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Jay Taylor
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/github.com/jaytaylor/html2text/README.md generated vendored Normal file

@ -0,0 +1,116 @@
# html2text
[![Documentation](https://godoc.org/github.com/jaytaylor/html2text?status.svg)](https://godoc.org/github.com/jaytaylor/html2text)
[![Build Status](https://travis-ci.org/jaytaylor/html2text.svg?branch=master)](https://travis-ci.org/jaytaylor/html2text)
[![Report Card](https://goreportcard.com/badge/github.com/jaytaylor/html2text)](https://goreportcard.com/report/github.com/jaytaylor/html2text)
### Converts HTML into text
## Introduction
Ensure your emails are readable by all!
Turns HTML into raw text, useful for sending fancy HTML emails with an equivalently nicely formatted TXT document as a fallback (e.g. for people who don't allow HTML emails or have other display issues).
html2text is a simple golang package for rendering HTML into plaintext.
There are still lots of improvements to be had, but FWIW this has worked fine for my [basic] HTML-2-text needs.
It requires go 1.x or newer ;)
## Download the package
```bash
go get github.com/jaytaylor/html2text
```
## Example usage
```go
package main
import (
"fmt"
"github.com/jaytaylor/html2text"
)
func main() {
inputHtml := `
<html>
<head>
<title>My Mega Service</title>
<link rel="stylesheet" href="main.css">
<style type="text/css">body { color: #fff; }</style>
</head>
<body>
<div class="logo">
<a href="http://mymegaservice.com/"><img src="/logo-image.jpg" alt="Mega Service"/></a>
</div>
<h1>Welcome to your new account on my service!</h1>
<p>
Here is some more information:
<ul>
<li>Link 1: <a href="https://example.com">Example.com</a></li>
<li>Link 2: <a href="https://example2.com">Example2.com</a></li>
<li>Something else</li>
</ul>
</p>
</body>
</html>
`
text, err := html2text.FromString(inputHtml)
if err != nil {
panic(err)
}
fmt.Println(text)
}
```
Output:
```
Mega Service ( http://mymegaservice.com/ )
******************************************
Welcome to your new account on my service!
******************************************
Here is some more information:
* Link 1: Example.com ( https://example.com )
* Link 2: Example2.com ( https://example2.com )
* Something else
```
## Unit-tests
Running the unit-tests is straightforward and standard:
```bash
go test
```
# License
Permissive MIT license.
## Contact
You are more than welcome to open issues and send pull requests if you find a bug or want a new feature.
If you appreciate this library please feel free to drop me a line and tell me! It's always nice to hear from people who have benefitted from my work.
Email: jay at (my github username).com
Twitter: [@jtaylor](https://twitter.com/jtaylor)

vendor/github.com/jaytaylor/html2text/html2text.go generated vendored Normal file

@ -0,0 +1,300 @@
package html2text
import (
"bytes"
"io"
"regexp"
"strings"
"unicode"
"golang.org/x/net/html"
"golang.org/x/net/html/atom"
)
var (
spacingRe = regexp.MustCompile(`[ \r\n\t]+`)
newlineRe = regexp.MustCompile(`\n\n+`)
)
type textifyTraverseCtx struct {
Buf bytes.Buffer
prefix string
blockquoteLevel int
lineLength int
endsWithSpace bool
endsWithNewline bool
justClosedDiv bool
}
func (ctx *textifyTraverseCtx) traverse(node *html.Node) error {
switch node.Type {
default:
return ctx.traverseChildren(node)
case html.TextNode:
data := strings.Trim(spacingRe.ReplaceAllString(node.Data, " "), " ")
return ctx.emit(data)
case html.ElementNode:
ctx.justClosedDiv = false
switch node.DataAtom {
case atom.Br:
return ctx.emit("\n")
case atom.H1, atom.H2, atom.H3:
subCtx := textifyTraverseCtx{}
if err := subCtx.traverseChildren(node); err != nil {
return err
}
str := subCtx.Buf.String()
dividerLen := 0
for _, line := range strings.Split(str, "\n") {
if lineLen := len([]rune(line)); lineLen-1 > dividerLen {
dividerLen = lineLen - 1
}
}
divider := ""
if node.DataAtom == atom.H1 {
divider = strings.Repeat("*", dividerLen)
} else {
divider = strings.Repeat("-", dividerLen)
}
if node.DataAtom == atom.H3 {
return ctx.emit("\n\n" + str + "\n" + divider + "\n\n")
}
return ctx.emit("\n\n" + divider + "\n" + str + "\n" + divider + "\n\n")
case atom.Blockquote:
ctx.blockquoteLevel++
ctx.prefix = strings.Repeat(">", ctx.blockquoteLevel) + " "
if err := ctx.emit("\n"); err != nil {
return err
}
if ctx.blockquoteLevel == 1 {
if err := ctx.emit("\n"); err != nil {
return err
}
}
if err := ctx.traverseChildren(node); err != nil {
return err
}
ctx.blockquoteLevel--
ctx.prefix = strings.Repeat(">", ctx.blockquoteLevel)
if ctx.blockquoteLevel > 0 {
ctx.prefix += " "
}
return ctx.emit("\n\n")
case atom.Div:
if ctx.lineLength > 0 {
if err := ctx.emit("\n"); err != nil {
return err
}
}
if err := ctx.traverseChildren(node); err != nil {
return err
}
var err error
if !ctx.justClosedDiv {
err = ctx.emit("\n")
}
ctx.justClosedDiv = true
return err
case atom.Li:
if err := ctx.emit("* "); err != nil {
return err
}
if err := ctx.traverseChildren(node); err != nil {
return err
}
return ctx.emit("\n")
case atom.B, atom.Strong:
subCtx := textifyTraverseCtx{}
subCtx.endsWithSpace = true
if err := subCtx.traverseChildren(node); err != nil {
return err
}
str := subCtx.Buf.String()
return ctx.emit("*" + str + "*")
case atom.A:
// If image is the only child, take its alt text as the link text
if img := node.FirstChild; img != nil && node.LastChild == img && img.DataAtom == atom.Img {
if altText := getAttrVal(img, "alt"); altText != "" {
ctx.emit(altText)
}
} else if err := ctx.traverseChildren(node); err != nil {
return err
}
hrefLink := ""
if attrVal := getAttrVal(node, "href"); attrVal != "" {
attrVal = ctx.normalizeHrefLink(attrVal)
if attrVal != "" {
hrefLink = "( " + attrVal + " )"
}
}
return ctx.emit(hrefLink)
case atom.P, atom.Ul, atom.Table:
if err := ctx.emit("\n\n"); err != nil {
return err
}
if err := ctx.traverseChildren(node); err != nil {
return err
}
return ctx.emit("\n\n")
case atom.Tr:
if err := ctx.traverseChildren(node); err != nil {
return err
}
return ctx.emit("\n")
case atom.Style, atom.Script, atom.Head:
// Ignore the subtree
return nil
default:
return ctx.traverseChildren(node)
}
}
}
func (ctx *textifyTraverseCtx) traverseChildren(node *html.Node) error {
for c := node.FirstChild; c != nil; c = c.NextSibling {
if err := ctx.traverse(c); err != nil {
return err
}
}
return nil
}
func (ctx *textifyTraverseCtx) emit(data string) error {
if len(data) == 0 {
return nil
}
lines := ctx.breakLongLines(data)
var err error
for _, line := range lines {
runes := []rune(line)
startsWithSpace := unicode.IsSpace(runes[0])
if !startsWithSpace && !ctx.endsWithSpace {
ctx.Buf.WriteByte(' ')
ctx.lineLength++
}
ctx.endsWithSpace = unicode.IsSpace(runes[len(runes)-1])
for _, c := range line {
_, err = ctx.Buf.WriteString(string(c))
if err != nil {
return err
}
ctx.lineLength++
if c == '\n' {
ctx.lineLength = 0
if ctx.prefix != "" {
_, err = ctx.Buf.WriteString(ctx.prefix)
if err != nil {
return err
}
}
}
}
}
return nil
}
func (ctx *textifyTraverseCtx) breakLongLines(data string) []string {
// only break lines when we are in blockquotes
if ctx.blockquoteLevel == 0 {
return []string{data}
}
var ret []string
runes := []rune(data)
l := len(runes)
existing := ctx.lineLength
if existing >= 74 {
ret = append(ret, "\n")
existing = 0
}
for l+existing > 74 {
i := 74 - existing
for i >= 0 && !unicode.IsSpace(runes[i]) {
i--
}
if i == -1 {
// no spaces, so go the other way
i = 74 - existing
for i < l && !unicode.IsSpace(runes[i]) {
i++
}
}
ret = append(ret, string(runes[:i])+"\n")
for i < l && unicode.IsSpace(runes[i]) {
i++
}
runes = runes[i:]
l = len(runes)
existing = 0
}
if len(runes) > 0 {
ret = append(ret, string(runes))
}
return ret
}
func (ctx *textifyTraverseCtx) normalizeHrefLink(link string) string {
link = strings.TrimSpace(link)
link = strings.TrimPrefix(link, "mailto:")
return link
}
func getAttrVal(node *html.Node, attrName string) string {
for _, attr := range node.Attr {
if attr.Key == attrName {
return attr.Val
}
}
return ""
}
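// FromReader parses the HTML document from the given reader and renders it as plain text.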
func FromReader(reader io.Reader) (string, error) {
doc, err := html.Parse(reader)
if err != nil {
return "", err
}
ctx := textifyTraverseCtx{
Buf: bytes.Buffer{},
}
if err = ctx.traverse(doc); err != nil {
return "", err
}
text := strings.TrimSpace(newlineRe.ReplaceAllString(
strings.Replace(ctx.Buf.String(), "\n ", "\n", -1), "\n\n"))
return text, nil
}
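// FromString parses the given HTML string and renders it as plain text.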
func FromString(input string) (string, error) {
text, err := FromReader(strings.NewReader(input))
if err != nil {
return "", err
}
return text, nil
}
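The README above demonstrates FromString; a minimal sketch of the FromReader path, using a strings.Reader as an illustrative stand-in for a file or HTTP response body:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/jaytaylor/html2text"
)

func main() {
	// Any io.Reader works; this reader holds an illustrative HTML snippet.
	r := strings.NewReader("<h1>Build failed</h1><p>See the <a href=\"https://example.com/log\">log</a>.</p>")

	text, err := html2text.FromReader(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(text)
}
```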

vendor/github.com/joho/godotenv/LICENCE generated vendored Normal file

@ -0,0 +1,23 @@
Copyright (c) 2013 John Barton
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/joho/godotenv/README.md generated vendored Normal file

@ -0,0 +1,127 @@
# GoDotEnv [![wercker status](https://app.wercker.com/status/507594c2ec7e60f19403a568dfea0f78 "wercker status")](https://app.wercker.com/project/bykey/507594c2ec7e60f19403a568dfea0f78)
A Go (golang) port of the Ruby dotenv project (which loads env vars from a .env file)
From the original Library:
> Storing configuration in the environment is one of the tenets of a twelve-factor app. Anything that is likely to change between deployment environments, such as resource handles for databases or credentials for external services, should be extracted from the code into environment variables.
>
> But it is not always practical to set environment variables on development machines or continuous integration servers where multiple projects are run. Dotenv loads variables from a .env file into ENV when the environment is bootstrapped.
It can be used as a library (for loading in env for your own daemons etc) or as a bin command.
There is test coverage and CI for both linuxish and windows environments, but I make no guarantees about the bin version working on windows.
## Installation
As a library
```shell
go get github.com/joho/godotenv
```
or if you want to use it as a bin command
```shell
go get github.com/joho/godotenv/cmd/godotenv
```
## Usage
Add your application configuration to your `.env` file in the root of your project:
```shell
S3_BUCKET=YOURS3BUCKET
SECRET_KEY=YOURSECRETKEYGOESHERE
```
Then in your Go app you can do something like
```go
package main
import (
"github.com/joho/godotenv"
"log"
"os"
)
func main() {
err := godotenv.Load()
if err != nil {
log.Fatal("Error loading .env file")
}
s3Bucket := os.Getenv("S3_BUCKET")
secretKey := os.Getenv("SECRET_KEY")
// now do something with s3 or whatever
}
```
If you're even lazier than that, you can just take advantage of the autoload package which will read in `.env` on import
```go
import _ "github.com/joho/godotenv/autoload"
```
While `.env` in the project root is the default, you don't have to be constrained, both examples below are 100% legit
```go
_ = godotenv.Load("somerandomfile")
_ = godotenv.Load("filenumberone.env", "filenumbertwo.env")
```
If you want to be really fancy with your env file you can do comments and exports (below is a valid env file)
```shell
# I am a comment and that is OK
SOME_VAR=someval
FOO=BAR # comments at line end are OK too
export BAR=BAZ
```
Or finally you can do YAML(ish) style
```yaml
FOO: bar
BAR: baz
```
as a final aside, if you don't want godotenv munging your env you can just get a map back instead
```go
var myEnv map[string]string
myEnv, err := godotenv.Read()
s3Bucket := myEnv["S3_BUCKET"]
```
### Command Mode
Assuming you've installed the command as above and you've got `$GOPATH/bin` in your `$PATH`
```
godotenv -f /some/path/to/.env some_command with some args
```
If you don't specify `-f` it will fall back on the default of loading `.env` in `PWD`
## Contributing
Contributions are most welcome! The parser itself is pretty stupidly naive and I wouldn't be surprised if it breaks with edge cases.
*code changes without tests will not be accepted*
1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Added some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request
## CI
Linux: [![wercker status](https://app.wercker.com/status/507594c2ec7e60f19403a568dfea0f78/m "wercker status")](https://app.wercker.com/project/bykey/507594c2ec7e60f19403a568dfea0f78) Windows: [![Build status](https://ci.appveyor.com/api/projects/status/9v40vnfvvgde64u4)](https://ci.appveyor.com/project/joho/godotenv)
## Who?
The original library [dotenv](https://github.com/bkeepers/dotenv) was written by [Brandon Keepers](http://opensoul.org/), and this port was done by [John Barton](http://whoisjohnbarton.com) based off the tests/fixtures in the original library.

vendor/github.com/joho/godotenv/autoload/autoload.go generated vendored Normal file

@ -0,0 +1,15 @@
package autoload
/*
You can just read the .env file on import just by doing
import _ "github.com/joho/godotenv/autoload"
And bob's your mother's brother
*/
import "github.com/joho/godotenv"
func init() {
godotenv.Load()
}

Some files were not shown because too many files have changed in this diff.