SBT V0-13-Reference
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Features of sbt . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
General Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Credits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Community Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
sbt Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Community Ivy Repository . . . . . . . . . . . . . . . . . . . . . 17
Available Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Test plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Community Repository Policy . . . . . . . . . . . . . . . . . . . . . . . 24
Bintray For Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Create an account on Bintray . . . . . . . . . . . . . . . . . . . . 25
Create a repository for your sbt plugins . . . . . . . . . . . . . . 25
Add the bintray-sbt plugin to your build. . . . . . . . . . . . . . 25
Make a release . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Linking your package to the sbt organization . . . . . . . . . . . 27
Linking your package to the sbt organization (sbt org admins) . 27
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Setup Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Do not put sbt-launch.jar on your classpath. . . . . . . . . . . 27
Terminal encoding . . . . . . . . . . . . . . . . . . . . . . . . . . 28
JVM heap, permgen, and stack sizes . . . . . . . . . . . . . . . . 28
Boot directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
HTTP/HTTPS/FTP Proxy . . . . . . . . . . . . . . . . . . . . . 28
Deploying to Sonatype . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
First - PGP Signatures . . . . . . . . . . . . . . . . . . . . . . . . 29
Second - Maven Publishing Settings . . . . . . . . . . . . . . . . 30
Third - POM Metadata . . . . . . . . . . . . . . . . . . . . . . . 30
Fourth - Adding credentials . . . . . . . . . . . . . . . . . . . . . 31
Finally - Publish . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
0.13.5-RC1 to 0.13.5-RC2 . . . . . . . . . . . . . . . . . . . . . . 34
0.13.2 to 0.13.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
0.13.1 to 0.13.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
0.13.0 to 0.13.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
0.12.4 to 0.13.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
sbt 0.13.0 Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Details of major changes . . . . . . . . . . . . . . . . . . . . . . . 39
sbt 0.12.0 Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Details of major changes from 0.11.2 to 0.12.0 . . . . . . . . . . . 45
scala-library.jar . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Older Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
0.12.3 to 0.12.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
0.12.2 to 0.12.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
0.12.1 to 0.12.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
0.12.0 to 0.12.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
0.11.3 to 0.12.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
0.11.2 to 0.11.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
0.11.1 to 0.11.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
0.11.0 to 0.11.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
0.10.1 to 0.11.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
0.10.0 to 0.10.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
0.7.7 to 0.10.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
0.7.5 to 0.7.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
0.7.4 to 0.7.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
0.7.3 to 0.7.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
0.7.2 to 0.7.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
0.7.1 to 0.7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
0.7.0 to 0.7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
0.5.6 to 0.7.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
0.5.5 to 0.5.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
0.5.4 to 0.5.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
0.5.2 to 0.5.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
0.5.1 to 0.5.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
0.4.6 to 0.5/0.5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
0.4.5 to 0.4.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
0.4.3 to 0.4.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
0.4 to 0.4.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
0.3.7 to 0.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
0.3.6 to 0.3.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
0.3.5 to 0.3.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
0.3.2 to 0.3.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
0.3.1 to 0.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
0.3 to 0.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
0.2.3 to 0.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
0.2.2 to 0.2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
0.2.1 to 0.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
0.2.0 to 0.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
0.1.9 to 0.2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
0.1.8 to 0.1.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
0.1.7 to 0.1.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
0.1.6 to 0.1.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
0.1.5 to 0.1.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
0.1.4 to 0.1.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
0.1.3 to 0.1.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
0.1.2 to 0.1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
0.1.1 to 0.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
0.1 to 0.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Migrating from 0.7 to 0.10+ . . . . . . . . . . . . . . . . . . . . . . . . 76
Why move to 0.13.5? . . . . . . . . . . . . . . . . . . . . . . . . . 77
Preserve project/ for 0.7.x project . . . . . . . . . . . . . . . . . 77
Create build.sbt for 0.13.5 . . . . . . . . . . . . . . . . . . . . . 77
Run sbt 0.13.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Switching back to sbt 0.7.x . . . . . . . . . . . . . . . . . . . . . 78
Contributing to sbt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Detailed Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Using sbt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Command Line Reference . . . . . . . . . . . . . . . . . . . . . . . . . 80
Notes on the command line . . . . . . . . . . . . . . . . . . . . . 81
Project-level tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Configuration-level tasks . . . . . . . . . . . . . . . . . . . . . . . 81
General commands . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Commands for managing the build definition . . . . . . . . . . . 84
Command Line Options . . . . . . . . . . . . . . . . . . . . . . . 84
Console Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Accessing settings . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Evaluating tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Cross-building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Publishing Conventions . . . . . . . . . . . . . . . . . . . . . . . 88
Using Cross-Built Libraries . . . . . . . . . . . . . . . . . . . . . 88
Cross-Building a Project . . . . . . . . . . . . . . . . . . . . . . . 89
Interacting with the Configuration System . . . . . . . . . . . . . . . . 90
Selecting commands, tasks, and settings . . . . . . . . . . . . . . 91
Discovering Settings and Tasks . . . . . . . . . . . . . . . . . . . 92
Triggered Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Compile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Running Multiple Commands . . . . . . . . . . . . . . . . . . . . 97
Scripts, REPL, and Dependencies . . . . . . . . . . . . . . . . . . . . . 97
Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Understanding Incremental Recompilation . . . . . . . . . . . . . . . . 100
sbt heuristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
What is included in the interface of a Scala class . . . . . . . . . 102
How to take advantage of sbt heuristics . . . . . . . . . . . . . . 104
Further references . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Classpaths, sources, and resources . . . . . . . . . . . . . . . . . . . . 107
Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Compiler Plugin Support . . . . . . . . . . . . . . . . . . . . . . . . . 110
Continuations Plugin Example . . . . . . . . . . . . . . . . . . . 111
Version-specific Compiler Plugin Example . . . . . . . . . . . . . 111
Configuring Scala . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Automatically managed Scala . . . . . . . . . . . . . . . . . . . . 111
Using Scala from a local directory . . . . . . . . . . . . . . . . . 113
sbt’s Scala version . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Forking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Enable forking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Change working directory . . . . . . . . . . . . . . . . . . . . . . 115
Forked JVM options . . . . . . . . . . . . . . . . . . . . . . . . . 116
Java Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Configuring output . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Configuring Input . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Direct Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Global Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Basic global configuration file . . . . . . . . . . . . . . . . . . . . 118
Global Settings using a Global Plugin . . . . . . . . . . . . . . . 118
Java Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Mapping Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Relative to a directory . . . . . . . . . . . . . . . . . . . . . . . . 120
Rebase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Flatten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Local Scala . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Macro Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Defining the Project Relationships . . . . . . . . . . . . . . . . . 123
Common Interface . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Constructing a File . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Path Finders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
File Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Parallel Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Task ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Practical constraints . . . . . . . . . . . . . . . . . . . . . . . . . 130
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
External Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Running Project Code . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
sbt’s Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Additional test configurations . . . . . . . . . . . . . . . . . . . . 143
JUnit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Dependency Management . . . . . . . . . . . . . . . . . . . . . . . . . 148
Artifacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Selecting default artifacts . . . . . . . . . . . . . . . . . . . . . . 148
Modifying default artifacts . . . . . . . . . . . . . . . . . . . . . . 149
Defining custom artifacts . . . . . . . . . . . . . . . . . . . . . . 150
Publishing .war files . . . . . . . . . . . . . . . . . . . . . . . . . 151
Using dependencies with artifacts . . . . . . . . . . . . . . . . . . 151
Dependency Management Flow . . . . . . . . . . . . . . . . . . . . . . 152
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Caching and Configuration . . . . . . . . . . . . . . . . . . . . . 152
General troubleshooting steps . . . . . . . . . . . . . . . . . . . . 153
Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Library Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Manual Dependency Management . . . . . . . . . . . . . . . . . 154
Automatic Dependency Management . . . . . . . . . . . . . . . . 155
Proxy Repositories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
sbt Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
~/.sbt/repositories . . . . . . . . . . . . . . . . . . . . . . . . 167
Proxying Ivy Repositories . . . . . . . . . . . . . . . . . . . . . . 168
Publishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Define the repository . . . . . . . . . . . . . . . . . . . . . . . . . 168
Credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Cross-publishing . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Published artifacts . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Modifying the generated POM . . . . . . . . . . . . . . . . . . . 171
Publishing Locally . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Resolvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Maven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Predefined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Custom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Update Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Filtering a Report and Getting Artifacts . . . . . . . . . . . . . . 176
Tasks and Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Defining a Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Getting values from multiple scopes . . . . . . . . . . . . . . . . 183
Advanced Task Operations . . . . . . . . . . . . . . . . . . . . . 187
Dynamic Computations with Def.taskDyn . . . . . . . . . . . . 188
Input Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Input Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Basic Input Task Definition . . . . . . . . . . . . . . . . . . . . . 193
Input Task using Parsers . . . . . . . . . . . . . . . . . . . . . . . 194
The InputTask type . . . . . . . . . . . . . . . . . . . . . . . . . 195
Using other input tasks . . . . . . . . . . . . . . . . . . . . . . . 195
Preapplying input . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Get a Task from an InputTask . . . . . . . . . . . . . . . . . . . 197
Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
What is a “command”? . . . . . . . . . . . . . . . . . . . . . . . 199
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Defining a Command . . . . . . . . . . . . . . . . . . . . . . . . . 200
Full Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Parsing and tab completion . . . . . . . . . . . . . . . . . . . . . . . . 203
Basic parsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Built-in parsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Combining parsers . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Transforming results . . . . . . . . . . . . . . . . . . . . . . . . . 205
Controlling tab completion . . . . . . . . . . . . . . . . . . . . . 205
State and actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Command-related data . . . . . . . . . . . . . . . . . . . . . . . . 206
Project-related data . . . . . . . . . . . . . . . . . . . . . . . . . 207
Project data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Classpaths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Running tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Using State in a task . . . . . . . . . . . . . . . . . . . . . . . . . 210
Tasks/Settings: Motivation . . . . . . . . . . . . . . . . . . . . . . . . 210
Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Plugins and Best Practices . . . . . . . . . . . . . . . . . . . . . . . . 212
General Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
project/ vs. ~/.sbt/ . . . . . . . . . . . . . . . . . . . . . . . . 212
Local settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
.sbtrc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Generated files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Don’t hard code . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Don’t “mutate” files . . . . . . . . . . . . . . . . . . . . . . . . . 214
Use absolute paths . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Parser combinators . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Using an auto plugin . . . . . . . . . . . . . . . . . . . . . . . . . 216
By Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Plugin dependencies . . . . . . . . . . . . . . . . . . . . . . . . . 217
Creating an auto plugin . . . . . . . . . . . . . . . . . . . . . . . 218
Using a library in a build definition example . . . . . . . . . . . . 224
Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Plugins Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Get your plugins known . . . . . . . . . . . . . . . . . . . . . . . 226
Don’t use default package . . . . . . . . . . . . . . . . . . . . . . 227
Use settings and tasks. Avoid commands. . . . . . . . . . . . . . 227
Use sbt.AutoPlugin . . . . . . . . . . . . . . . . . . . . . . . . . 227
Reuse existing keys . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Avoid namespace clashes . . . . . . . . . . . . . . . . . . . . . . . 227
Provide core feature in a plain old Scala object . . . . . . . . . . 228
Configuration advices . . . . . . . . . . . . . . . . . . . . . . . . 228
Mucking with globalSettings . . . . . . . . . . . . . . . . . . . 231
Sbt Launcher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Getting Started with the Sbt Launcher . . . . . . . . . . . . . . . . . . 232
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Sbt Launcher Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 236
Module Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Classloader Caching and Isolation . . . . . . . . . . . . . . . . . . 236
Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Service Discovery and Isolation . . . . . . . . . . . . . . . . . . . 238
Sbt Launcher Configuration . . . . . . . . . . . . . . . . . . . . . . . . 239
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Variable Substitution . . . . . . . . . . . . . . . . . . . . . . . . . 242
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Developer’s Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Core Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Introduction to build state . . . . . . . . . . . . . . . . . . . . . . 244
Settings Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 245
Task Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Settings Core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
sbt Settings Discussion . . . . . . . . . . . . . . . . . . . . . . . . 250
Setting Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Controlling Initialization . . . . . . . . . . . . . . . . . . . . . . . 253
Build Loaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Custom Resolver . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Custom Builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Custom Transformer . . . . . . . . . . . . . . . . . . . . . . . . . 259
The BuildDependencies type . . . . . . . . . . . . . . . . . . . . 260
Creating Command Line Applications Using sbt . . . . . . . . . . . . 260
Hello World Example . . . . . . . . . . . . . . . . . . . . . . . . . 261
Nightly Builds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
How to… . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Classpaths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Include a new type of managed artifact on the classpath, such as
mar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Get the classpath used for compilation . . . . . . . . . . . . . . . 264
Get the runtime classpath, including the project’s compiled classes . 264
Get the test classpath, including the project’s compiled test classes . 265
Use packaged jars on classpaths instead of class directories . . . . 265
Get all managed jars for a configuration . . . . . . . . . . . . . . 266
Get the files included in a classpath . . . . . . . . . . . . . . . . 266
Get the module and artifact that produced a classpath entry . . 266
Customizing paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Change the default Scala source directory . . . . . . . . . . . . . 267
Change the default Java source directory . . . . . . . . . . . . . . 267
Change the default resource directory . . . . . . . . . . . . . . . 267
Change the default (unmanaged) library directory . . . . . . . . 268
Disable using the project’s base directory as a source directory . 268
Add an additional source directory . . . . . . . . . . . . . . . . . 268
Add an additional resource directory . . . . . . . . . . . . . . . . 269
Include/exclude files in the source directory . . . . . . . . . . . . 269
Include/exclude files in the resource directory . . . . . . . . . . . 269
Include only certain (unmanaged) libraries . . . . . . . . . . . . . 270
Generating files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Generate sources . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Generate resources . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Inspect the build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Show or search help for a command, task, or setting . . . . . . . 272
List available tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 273
List available settings . . . . . . . . . . . . . . . . . . . . . . . . 273
Display the description and type of a setting or task . . . . . . . 274
Display the delegation chain of a setting or task . . . . . . . . . . 275
Show the list of projects and builds . . . . . . . . . . . . . . . . . 275
Show the current session (temporary) settings . . . . . . . . . . . 276
Show basic information about sbt and the current build . . . . . 276
Show the value of a setting . . . . . . . . . . . . . . . . . . . . . 276
Show the result of executing a task . . . . . . . . . . . . . . . . . 276
Show the classpath used for compilation or testing . . . . . . . . 277
Show the main classes detected in a project . . . . . . . . . . . . 277
Show the test classes detected in a project . . . . . . . . . . . . . 277
Interactive mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Use tab completion . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Show more tab completion suggestions . . . . . . . . . . . . . . . 278
Modify the default JLine keybindings . . . . . . . . . . . . . . . . 279
Configure the prompt string . . . . . . . . . . . . . . . . . . . . . 279
Use history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Change the location of the interactive history file . . . . . . . . . 279
Use the same history for all projects . . . . . . . . . . . . . . . . 280
Disable interactive history . . . . . . . . . . . . . . . . . . . . . . 280
Run commands before entering interactive mode . . . . . . . . . 280
Configure and use logging . . . . . . . . . . . . . . . . . . . . . . . . . 281
View the logging output of the previously executed command . . 281
View the previous logging output of a specific task . . . . . . . . 282
Show warnings from the previous compilation . . . . . . . . . . . 283
Change the logging level globally . . . . . . . . . . . . . . . . . . 283
Change the logging level for a specific task, configuration, or project 284
Configure printing of stack traces . . . . . . . . . . . . . . . . . . 284
Print the output of tests immediately instead of buffering . . . . 285
Add a custom logger . . . . . . . . . . . . . . . . . . . . . . . . . 285
Log messages in a task . . . . . . . . . . . . . . . . . . . . . . . . 285
Project metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Set the project name . . . . . . . . . . . . . . . . . . . . . . . . . 285
Set the project version . . . . . . . . . . . . . . . . . . . . . . . . 286
Set the project organization . . . . . . . . . . . . . . . . . . . . . 286
Set the project’s homepage and other metadata . . . . . . . . . . 286
Configure packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Use the packaged jar on classpaths instead of class directory . . . 286
Add manifest attributes . . . . . . . . . . . . . . . . . . . . . . . 287
Change the file name of a package . . . . . . . . . . . . . . . . . 287
Modify the contents of the package . . . . . . . . . . . . . . . . . 288
Running commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Pass arguments to a command or task in batch mode . . . . . . . 288
Provide multiple commands to run consecutively . . . . . . . . . 288
Read commands from a file . . . . . . . . . . . . . . . . . . . . . 288
Define an alias for a command or task . . . . . . . . . . . . . . . 289
Quickly evaluate a Scala expression . . . . . . . . . . . . . . . . . 289
Configure and use Scala . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Set the Scala version used for building the project . . . . . . . . 289
Disable the automatic dependency on the Scala library . . . . . . 290
Temporarily switch to a different Scala version . . . . . . . . . . 290
Use a local Scala installation for building a project . . . . . . . . 290
Build a project against multiple Scala versions . . . . . . . . . . 290
Enter the Scala REPL with a project’s dependencies on the class-
path, but not the compiled project classes . . . . . . . . . 290
Enter the Scala REPL with a project’s dependencies and com-
piled code on the classpath . . . . . . . . . . . . . . . . . 290
Enter the Scala REPL with plugins and the build definition on
the classpath . . . . . . . . . . . . . . . . . . . . . . . . . 291
Define the initial commands evaluated when entering the Scala
REPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Define the commands evaluated when exiting the Scala REPL . . 291
Use the Scala REPL from project code . . . . . . . . . . . . . . . 292
Generate API documentation . . . . . . . . . . . . . . . . . . . . . . . 292
Select javadoc or scaladoc . . . . . . . . . . . . . . . . . . . . . . 292
Set the options used for generating scaladoc independently of
compilation . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Add options for scaladoc to the compilation options . . . . . . . 293
Set the options used for generating javadoc independently of com-
pilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Add options for javadoc to the compilation options . . . . . . . . 293
Enable automatic linking to the external Scaladoc of managed
dependencies . . . . . . . . . . . . . . . . . . . . . . . . . 293
Enable manual linking to the external Scaladoc of managed de-
pendencies . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Define the location of API documentation for a library . . . . . . 294
Triggered execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Run a command when sources change . . . . . . . . . . . . . . . 294
Run multiple commands when sources change . . . . . . . . . . . 295
Configure the sources that are checked for changes . . . . . . . . 295
Set the time interval between checks for changes to sources . . . 295
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
.sbt build examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
.scala build example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
External Builds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Advanced configurations example . . . . . . . . . . . . . . . . . . . . . 303
Advanced command example . . . . . . . . . . . . . . . . . . . . . . . 305
Frequently Asked Questions . . . . . . . . . . . . . . . . . . . . . . . . 306
Project Information . . . . . . . . . . . . . . . . . . . . . . . . . 306
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Build definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Extending sbt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Dependency Management . . . . . . . . . . . . . . . . . . . . . . 314
Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
0.7 to 0.10+ Migration . . . . . . . . . . . . . . . . . . . . . . . . 315
My tests all run really fast but some are broken that weren’t in 0.7! . . . 316
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Values and Types . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Preface
sbt is a build tool for Scala, Java, and more. It requires Java 1.6 or later.
Install
Getting Started
To get started, please read the Getting Started Guide. You will save yourself a
lot of time if you have the right understanding of the big picture up-front. All
documentation may be found via the table of contents included at the end of
every page.
Use Stack Overflow for questions. Use the sbt-dev mailing list for discussing sbt development. Use @scala_sbt (https://twitter.com/scala_sbt) for questions and discussions.
Features of sbt
Also
General Information
Credits
See the sbt contributors on GitHub and sbt GitHub organization members.
Additionally, these people have contributed ideas, documentation, or code to
sbt but are not recorded in either of the above:
• Josh Cough
• Nolan Darilek
• Nathan Hamblen
• Ismael Juma
• Viktor Klang
• David R. MacIver
• Ross McDonald
• Andrew O’Malley
• Jorge Ortiz
• Mikko Peltonen
• Ray Racine
• Stuart Roebuck
• Harshad RJ
• Tony Sloane
• Seth Tisue
• Francisco Treacy
• Vesa Vilhonen
Community Plugins
sbt Organization
The sbt organization is available for use by any sbt plugin. Developers who
contribute their plugins into the community organization will still retain control
over their repository and its access. The goal of the sbt organization is to
organize sbt software into one central location.
A side benefit to using the sbt organization for projects is that you can use
gh-pages to host websites under the http://scala-sbt.org domain.
Community Ivy Repository

Typesafe has provided a freely available Ivy Repository for sbt projects to use.
This Ivy repository is mirrored from the freely available Bintray service. If you’d
like to submit your plugin, please follow these instructions: Bintray For Plugins.
Available Plugins
Please feel free to submit a pull request that adds your plugin to the list.
Plugins for IDEs
• IntelliJ IDEA
– sbt Plugin to generate IDEA project configuration: https://github.com/mpeltonen/sbt-idea
– IDEA Plugin to embed an sbt Console into the IDE: https://github.com/orfjackal/idea-sbt-plugin
• NetBeans (no support to create a new sbt project yet)
– sbt-netbeans-plugin (older): https://github.com/remeniuk/sbt-netbeans-plugin
– sbt plugin to generate NetBeans configuration: https://github.com/dcaoyuan/nbsbt
– sbt plugin to add Scala support to NetBeans: https://github.com/dcaoyuan/nbscala
• Eclipse: https://github.com/typesafehub/sbteclipse
• Sublime Text: https://github.com/orrsella/sbt-sublime
• Ensime: https://github.com/aemoncannon/ensime-sbt-cmd
• sbt-mode for Emacs: https://github.com/hvesalai/sbt-mode
• sbt-ctags (manage library dependency sources for vim, emacs, sublime)
https://github.com/kalmanb/sbt-ctags
Web Plugins
• xsbt-web-plugin: https://github.com/JamesEarlDouglas/xsbt-web-plugin
• xsbt-webstart: https://github.com/ritschwumm/xsbt-webstart
• sbt-appengine: https://github.com/sbt/sbt-appengine
• sbt-gwt-plugin: https://github.com/thunderklaus/sbt-gwt-plugin
• sbt-cloudbees-plugin: https://github.com/timperrett/sbt-cloudbees-plugin
• sbt-jelastic-deploy: https://github.com/casualjim/sbt-jelastic-deploy
• sbt-elasticbeanstalk (Deploy WAR files to AWS Elastic Beanstalk): https://github.com/sqs/sbt-elasticbeanstalk
• sbt-cloudformation (AWS CloudFormation templates and stacks management): https://github.com/tptodorov/sbt-cloudformation
Test plugins
• junit_xml_listener: https://github.com/ijuma/junit_xml_listener
• sbt-growl-plugin: https://github.com/softprops/sbt-growl-plugin
• sbt-teamcity-test-reporting-plugin: https://github.com/guardian/sbt-teamcity-test-reporting-plugin
• xsbt-cucumber-plugin: https://github.com/skipoleschris/xsbt-cucumber-plugin
• sbt-multi-jvm: https://github.com/typesafehub/sbt-multi-jvm
• sbt-testng-interface: https://github.com/sbt/sbt-testng-interface
One jar plugins

• sbt-assembly: https://github.com/sbt/sbt-assembly
• xsbt-proguard-plugin: https://github.com/adamw/xsbt-proguard-plugin
• sbt-deploy: https://github.com/reaktor/sbt-deploy
• sbt-appbundle (os x standalone): https://github.com/sbt/sbt-appbundle
• sbt-onejar (Packages your project using One-JAR™): https://github.com/sbt/sbt-onejar
Frontend development plugins

• coffeescripted-sbt: https://github.com/softprops/coffeescripted-sbt
• less-sbt (for less-1.3.0): https://github.com/softprops/less-sbt
• sbt-less-plugin (it uses less-1.3.0): https://github.com/btd/sbt-less-plugin
• sbt-emberjs: https://github.com/stefri/sbt-emberjs
• sbt-closure: https://github.com/eltimn/sbt-closure
• sbt-imagej: https://github.com/jpsacha/sbt-imagej
• sbt-yui-compressor: https://github.com/indrajitr/sbt-yui-compressor
• sbt-requirejs: https://github.com/scalatra/sbt-requirejs
• sbt-vaadin-plugin: https://github.com/henrikerola/sbt-vaadin-plugin
• sbt-purescript: https://github.com/eamelink/sbt-purescript
• sbt-jasmine-plugin (Run javascript tests with jasmine within sbt): https://github.com/joescii/sbt-jasmine-plugin
Game development plugins
Release plugins
System plugins
Code generator plugins
Database plugins
• flyway-sbt (Flyway - The agile database migration framework): http://flywaydb.org/getstarted/firststeps/sbt.html
• sbt-liquibase (Liquibase RDBMS database migrations): https://github.com/bigtoast/sbt-liquibase
• sbt-dbdeploy (dbdeploy, a database change management tool): https://github.com/mr-ken/sbt-dbdeploy
Documentation plugins
Utility plugins
• sbt-editsource (A poor man’s sed(1), for sbt): http://software.clapper.org/sbt-editsource/
• sbt-conflict-classes (Show conflict classes from classpath): https://github.com/todesking/sbt-conflict-classes
• sbt-cross-building (Simplifies building your plugins for multiple versions of sbt): https://github.com/jrudolph/sbt-cross-building
• sbt-doge (aggregates tasks across subprojects and their crossScalaVersions): https://github.com/sbt/sbt-doge
• sbt-revolver (Triggered restart, hot reloading): https://github.com/spray/sbt-revolver
• sbt-scalaedit (Open and upgrade ScalaEdit (text editor)): https://github.com/kjellwinblad/sbt-scalaedit-plugin
• sbt-man (Looks up scaladoc): https://github.com/sbt/sbt-man
• sbt-taglist (Looks for TODO-tags in the sources): https://github.com/johanandren/sbt-taglist
• migration-manager: https://github.com/typesafehub/migration-manager
• sbt-scalariform (adding support for source code formatting using Scalariform): https://github.com/sbt/sbt-scalariform
• sbt-aspectj: https://github.com/sbt/sbt-aspectj
• sbt-properties: https://github.com/sbt/sbt-properties
• sbt-multi-publish (publish to more than one repository simultaneously): https://github.com/davidharcombe/sbt-multi-publish
• sbt-about-plugins (shows some details about plugins loaded): https://github.com/jozic/sbt-about-plugins
• sbt-one-log (make log dependency easy): https://github.com/zavakid/sbt-one-log
• sbt-git-stamp (include git metadata in MANIFEST.MF file in artifact): https://bitbucket.org/pkaeding/sbt-git-stamp
• fm-sbt-s3-resolver (Resolve and Publish using Amazon S3): https://github.com/frugalmechanic/fm-sbt-s3-resolver
• sbt-scct: https://github.com/sqality/sbt-scct
• sbt-scoverage: https://github.com/scoverage/sbt-scoverage
• jacoco4sbt: https://github.com/sbt/jacoco4sbt
• xsbt-coveralls-plugin: https://github.com/theon/xsbt-coveralls-plugin
Android plugin
• android-plugin: https://github.com/jberkel/android-plugin
• android-sdk-plugin: https://github.com/pfn/android-sdk-plugin
Build interoperability plugins
• ant4sbt: https://github.com/sbt/ant4sbt
OSGi plugin
• sbtosgi: https://github.com/typesafehub/sbtosgi
Plugin bundles
Community Repository Policy

The community repository has the following guideline for artifacts published to it:
1. All published artifacts are the authors own work or have an appropriate
license which grants distribution rights.
2. All published artifacts come from open source projects, that have an open
patch acceptance policy.
3. All published artifacts are placed under an organization in a DNS domain
for which you have the permission to use or are an owner (scala-sbt.org is
available for sbt plugins).
4. All published artifacts are signed by a committer of the project (coming
soon).
Bintray For Plugins

Create an account on Bintray
First, go to http://bintray.com. Click on the sign in link on the top left, and
then the sign up button.
Note: If you had an account on repo.scala-sbt.org previously, please use the same email address when you create this account.
Create a repository for your sbt plugins

Now, we’ll create a repository to host our personal sbt plugins. In Bintray, create a generic repository called sbt-plugins.
First, go to your user page and click on the new repository link:
You should see the following dialog:
Fill it out similarly to the above image; the settings are:
• Name: sbt-plugins
• Type: Generic
• Desc: My sbt plugins
• Tags: sbt
Add the bintray-sbt plugin to your build

Once this is done, you can begin to configure your sbt-plugins to publish to Bintray.
resolvers += Resolver.url(
"bintray-sbt-plugin-releases",
url("http://dl.bintray.com/content/sbt/sbt-plugin-releases"))(
Resolver.ivyStylePatterns)
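The line that actually adds the plugin itself to project/plugins.sbt was lost in conversion; it was presumably an addSbtPlugin line along these lines (me.lessis was the plugin’s organization at the time; the version placeholder is deliberate):

```scala
addSbtPlugin("me.lessis" % "bintray-sbt" % "<version>")
```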
Next, make sure your build.sbt file has the following settings:
import bintray.Keys._
sbtPlugin := true
name := "<YOUR PLUGIN HERE>"
publishMavenStyle := false
bintrayPublishSettings
Make sure your project has a valid license specified, as well as a unique name and organization.
Make a release
Once your build is configured, open the sbt console in your build and run
sbt> publish
The plugin will ask you for your credentials. If you don’t know where they are,
you can find them on Bintray.
This will get you your password. The bintray-sbt plugin will save your API key
for future use.
NOTE: We have to do this before we can link our package to the sbt org.
Linking your package to the sbt organization
Now that your plugin is packaged on bintray, you can include it in the commu-
nity sbt repository. To do so, go to the Community sbt repository screen.
1. Click the green include my package button and select your plugin.
2. Search for your plugin by name and click on the link.
3. Your request should be automatically filled out; just click send.
4. Shortly, one of the sbt repository admins will approve your link request.
From here on, any releases of your plugin will automatically appear in the
community sbt repository. Congratulations and thank you so much for your
contributions!
If you’re a member of the sbt organization on Bintray, you can link your package to the sbt organization via a different means. To do so, first navigate to the plugin you wish to include and click on the link button:
After clicking this you should see a link like the following:
Click on the sbt/sbt-plugin-releases repository and you’re done! Any future
releases will be included in the sbt-plugin repository.
Summary
After setting up the repository, all new releases will automatically be included in the sbt-plugin-releases repository, available for all users. When you create a new plugin, after the initial release you’ll have to link it to the sbt community repository, but the rest of the setup should already be completed. Thanks for your contributions and happy hacking.
Setup Notes
Terminal encoding
The character encoding used by your terminal may differ from Java’s default
encoding for your platform. In this case, you will need to add the option
-Dfile.encoding=<encoding> in your sbt script to set the encoding, which
might look like:
java -Dfile.encoding=UTF8
JVM heap, permgen, and stack sizes

If you find yourself running out of permgen space or your workstation is low on memory, adjust the JVM configuration as you would for any application. For example, a common set of memory-related options is:
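The concrete option listing that followed in the original was lost in conversion; a typical sbt script line with memory-related flags might look like this (the values are illustrative — adjust them for your machine):

```
java -Xms512M -Xmx1536M -Xss1M -XX:MaxPermSize=256M -jar sbt-launch.jar "$@"
```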
Boot directory
sbt-launch.jar is just a bootstrap; the actual meat of sbt, and the Scala
compiler and standard library, are downloaded to the shared directory
$HOME/.sbt/boot/.
To change the location of this directory, set the sbt.boot.directory system
property in your sbt script. A relative path will be resolved against the current
working directory, which can be useful if you want to avoid sharing the boot
directory between projects. For example, the following uses the pre-0.11 style
of putting the boot directory in project/boot/:
java -Dsbt.boot.directory=project/boot/
HTTP/HTTPS/FTP Proxy
On Unix, sbt will pick up any HTTP, HTTPS, or FTP proxy settings from the
standard http_proxy, https_proxy, and ftp_proxy environment variables. If
you are behind a proxy requiring authentication, your sbt script must also
pass flags to set the http.proxyUser and http.proxyPassword properties
for HTTP, ftp.proxyUser and ftp.proxyPassword properties for FTP, or
https.proxyUser and https.proxyPassword properties for HTTPS.
For example,
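The Unix example here was lost in conversion; it presumably passed the properties as JVM options in the sbt script, along these lines (username and password are placeholders):

```
java -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword -jar sbt-launch.jar "$@"
```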
On Windows, your script should set properties for proxy host, port, and if
applicable, username and password. For example, for HTTP:
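The Windows example was also lost; a sketch of a batch-file equivalent setting all four properties (host, port, and credentials are placeholders):

```
set JAVA_OPTS=-Dhttp.proxyHost=myproxy -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword
```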
Replace http with https or ftp in the above command line to configure HTTPS
or FTP.
Deploying to Sonatype
You’ll need to PGP sign your artifacts for the Sonatype repository. Don’t worry,
there’s a plugin for that. Follow the instructions for the plugin and you’ll have
PGP signed artifacts in no time.
If the command to generate your key fails execute the following commands and
remove the displayed files:
If your PGP key has not yet been distributed to the keyserver pool, i.e., you’ve
just generated it, you’ll need to publish it. You can do so using the sbt-pgp
plugin:
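The command itself was dropped in conversion; given the mention of the SendKey command below, with sbt-pgp it would look something like the following, run from the sbt prompt (the keyserver URL is an example — substitute another server if it fails):

```
pgp-cmd send-key keyname hkp://pool.sks-keyservers.net
```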
(where keyname is the name or email address used when creating the key or
hexadecimal identifier for the key.)
If you see no output from sbt-pgp then the key name specified was not found.
If it fails to run the SendKey command you can try another server (for example:
hkp://keyserver.ubuntu.com). A list of servers can be found at the status page
of sks-keyservers.net.
Second - Maven Publishing Settings
publishMavenStyle := true
is used to ensure POMs are generated and pushed. Next, you have to set up the repositories you wish to push to. Luckily, Sonatype’s OSSRH uses the same
URLs for everyone:
publishTo := {
val nexus = "https://oss.sonatype.org/"
if (isSnapshot.value)
Some("snapshots" at nexus + "content/repositories/snapshots")
else
Some("releases" at nexus + "service/local/staging/deploy/maven2")
}
Another good idea is to not publish your test artifacts (this is the default):
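The setting shown at this point in the original was lost; it is presumably the standard one for skipping test artifacts:

```scala
publishArtifact in Test := false
```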
Now, we want to control what’s available in the pom.xml file. This file describes our project in the Maven repository and is used by indexing services for search and discovery. This means it’s important that pom.xml have all the information we wish to advertise, as well as the required info!
First, let’s make sure no repositories show up in the POM file. To publish
on maven-central, all required artifacts must also be hosted on maven central.
However, sometimes we have optional dependencies for special features. If that’s
the case, let’s remove the repositories for optional dependencies in our artifact:
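The snippet that followed was lost in conversion; the usual way to strip all repositories from the generated POM is:

```scala
pomIncludeRepository := { _ => false }
```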
Next, the POM metadata that isn’t generated by sbt must be added. This is
done through the pomExtra configuration option:
pomExtra := (
<url>http://jsuereth.com/scala-arm</url>
<licenses>
<license>
<name>BSD-style</name>
<url>http://www.opensource.org/licenses/bsd-license.php</url>
<distribution>repo</distribution>
</license>
</licenses>
<scm>
<url>git@github.com:jsuereth/scala-arm.git</url>
<connection>scm:git:git@github.com:jsuereth/scala-arm.git</connection>
</scm>
<developers>
<developer>
<id>jsuereth</id>
<name>Josh Suereth</name>
<url>http://jsuereth.com</url>
</developer>
</developers>)
Note that sbt will automatically inject licenses and url nodes if the corresponding keys are defined in your build file. Thus an alternative to the above pomExtra is to include the following entries:
homepage := Some(url("http://jsuereth.com/scala-arm"))
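For the licenses node, the matching entry would be along these lines (mirroring the license given in the pomExtra above):

```scala
licenses := Seq("BSD-style" -> url("http://www.opensource.org/licenses/bsd-license.php"))
```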
This might be advantageous if those keys are used also by other plugins (e.g.
ls). You cannot use both the sbt licenses key and the licenses section in
pomExtra at the same time, as this will produce duplicate entries in the final
POM file, leading to a rejection in Sonatype’s staging process.
The full format of a pom.xml file is outlined here.
The credentials for your Sonatype OSSRH account need to be added somewhere.
Common convention is a ~/.sbt/0.13/sonatype.sbt file with the following:
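The file contents were lost in conversion; given the note that follows about the first two strings, they would be along the lines of (username and password are placeholders):

```scala
credentials += Credentials("Sonatype Nexus Repository Manager",
                           "oss.sonatype.org",
                           "<your username>",
                           "<your password>")
```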
Note: The first two strings must be "Sonatype Nexus Repository
Manager" and "oss.sonatype.org" for Ivy to use the credentials.
Finally - Publish
In sbt, run publishSigned and you should see something like the following:
> publishSigned
Please enter your PGP passphrase> ***********
[info] Packaging /home/josh/projects/typesafe/scala-arm/target/scala-2.9.1/scala-arm_2.9.1-1.
[info] Wrote /home/josh/projects/typesafe/scala-arm/target/scala-2.9.1/scala-arm_2.9.1-1.2.po
[info] Packaging /home/josh/projects/typesafe/scala-arm/target/scala-2.9.1/scala-arm_2.9.1-1.
[info] Packaging /home/josh/projects/typesafe/scala-arm/target/scala-2.9.1/scala-arm_2.9.1-1.
[info] :: delivering :: com.jsuereth#scala-arm_2.9.1;1.2 :: 1.2 :: release :: Mon Jan 23 13:16:5
[info] Done packaging.
[info] Done packaging.
[info] Done packaging.
[info] delivering ivy file to /home/josh/projects/typesafe/scala-arm/target/scala-2.9.1/ivy-1
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[info] published scala-arm_2.9.1 to https://oss.sonatype.org/service/local/staging/deploy/mav
[success] Total time: 9 s, completed Jan 23, 2012 1:17:03 PM
After publishing you have to follow the release workflow of Nexus. The sbt-sonatype plugin allows the release workflow procedures to be performed directly from sbt.
Summary
To get your project hosted on Sonatype (and Maven Central), you will need to:
• Have a GPG key pair, with a published public key,
• An sbt file with your Sonatype credentials that is not pushed to the VCS,
• Add the sbt-pgp plugin to sign the artefacts,
• Modify build.sbt with the required elements in the generated POM.
Starting with a project that is not being published, you’ll need to install GPG, generate and publish your key. Switching to sbt, you’ll then need to:
~/.sbt/sonatype.sbt

This file (kept outside the VCS) contains the Sonatype credentials settings:

build.sbt

Finally, you’ll need to tweak the generated POM in your build.sbt. The tweaks include specifying the project’s authors, URL, SCM and many others:
publishTo := {
val nexus = "https://oss.sonatype.org/"
if (isSnapshot.value)
Some("snapshots" at nexus + "content/repositories/snapshots")
else
Some("releases" at nexus + "service/local/staging/deploy/maven2")
}
publishMavenStyle := true
pomExtra := (
<url>http://your.project.url</url>
<licenses>
<license>
<name>BSD-style</name>
<url>http://www.opensource.org/licenses/bsd-license.php</url>
<distribution>repo</distribution>
</license>
</licenses>
<scm>
<url>git@github.com:your-account/your-project.git</url>
<connection>scm:git:git@github.com:your-account/your-project.git</connection>
</scm>
<developers>
<developer>
<id>you</id>
<name>Your Name</name>
<url>http://your.url</url>
</developer>
</developers>
)
Changes
0.13.5-RC1 to 0.13.5-RC2
0.13.2 to 0.13.5
• The Scala version for sbt and sbt plugins is now 2.10.4. This is a compat-
ible version bump.
• Added a new setting testResultLogger to allow customisation of logging
of test results. (gh-1225)
• When test is run and there are no tests available, omit logging output.
Especially useful for aggregate modules. test-only et al unaffected. (gh-
1185)
• sbt now uses minor-patch version of ivy 2.4 (org.scala-sbt.ivy:ivy:2.4.0-sbt-
)
• sbt.Plugin deprecated in favor of sbt.AutoPlugin
• name-hashing incremental compiler now supports scala macros.
• testResultLogger is now configured.
• sbt-server hooks for task cancellation.
• Add JUnitXmlReportPlugin which generates junit-xml-reports for all
tests.
0.13.1 to 0.13.2
0.13.0 to 0.13.1
• The Scala version for sbt and sbt plugins is now 2.10.3. This is a compat-
ible version bump.
• New method toTask on Initialize[InputTask[T]] to apply the full
input and get a plain task out.
• Improved performance of inspect tree
• Work around various issues with Maven local repositories, including re-
solving -SNAPSHOTs from them. (gh-321)
• Better representation of no cross-version suffix in suffix conflict error mes-
sage: now shows <none> instead of just _
• TrapExit support for multiple, concurrent managed applications. Now
enabled by default for all run-like tasks. (gh-831)
• Add minimal support for class file formats 51.0, 52.0 in incremental com-
piler. (gh-842)
• Allow main class to be non-public. (gh-883)
• Convert -classpath to CLASSPATH when forking on Windows and length
exceeds a heuristic maximum. (gh-755)
• scalacOptions for .scala build definitions are now also used for .sbt
files
• error, warn, info, debug commands to set log level and --error, … to
set the level before the project is loaded. (gh-806)
• sLog setting that provides a Logger for use by settings. (gh-806)
• Early commands: any command prefixed with -- gets moved before other
commands on startup and doesn’t force sbt into batch mode.
• Deprecate internal -, --, and --- commands in favor of onFailure,
sbtClearOnFailure, and resumeFromFailure.
• makePom no longer generates <type> elements for standard classifiers. (gh-
728)
• Fix many instances of the Turkish i bug.
• Read https+ftp proxy environment variables into system properties where
Java will use them. (gh-886)
• The Process methods that are redirection-like no longer discard the exit
code of the input. This addresses an inconsistency with Fork, where using
the CustomOutput OutputStrategy makes the exit code always zero.
• Recover from failed reload command in the scripted sbt handler.
• Parse external pom.xml with CustomPomParser to handle multiple defini-
tions. (gh-758)
• Improve key collision error message (gh-877)
• Display the source position of an undefined setting.
• Respect the -nowarn option when compiling Scala sources.
• Improve forked test debugging by listing tests run by sbt in debug output.
(gh-868)
• Fix scaladoc cache to track changes to -doc-root-content (gh-837)
• Incremental compiler: Internal refactoring in preparation for name-
hashing (gh-936)
• Incremental compiler: improved cache loading/saving speed by internal
file names (gh-931)
• Docs: many contributed miscellaneous fixes and additions
• Docs: link to page source now at the bottom of the page
• Docs: sitemap now automatically generated
• Docs: custom role enables links from a key name in the docs to the val in
Keys
• Docs: restore sxr support and fix links to sxr’d sources. (gh-863)
0.12.4 to 0.13.0
The changes for 0.13.0 are listed on a separate page. See sbt 0.13.0 changes.
Features
• Support vals and defs in .sbt files. Details below.
• Support defining Projects in .sbt files: vals of type Project are added to
the Build. Details below.
• New syntax for settings, tasks, and input tasks. Details below.
• Support setting the Scala home directory temporarily using the switch command: ++ scala-version=/path/to/scala/home. The scala-version
part is optional, but is used as the version for any managed dependencies.
• Add publishM2 task for publishing to ~/.m2/repository. (gh-485)
• New API for getting tasks and settings from multiple projects and config-
urations. See the new section getting values from multiple scopes.
• Enhanced test interface for better support of test framework features. (De-
tails pending.)
• export command
Fixes
Improvements
• Run the API extraction phase after the compiler’s pickler phase instead
of typer to allow compiler plugins after typer. (Adriaan M., gh-609)
• Record defining source position of settings. inspect shows the definition
location of all settings contributing to a defined value.
• Allow the root project to be specified explicitly in Build.rootProject.
• Tasks that need a directory for storing cache information can now
use the cacheDirectory method on streams. This supersedes the
cacheDirectory setting.
• The environment variables used when forking run and test may be set
via envVars, which is a Task[Map[String,String]]. (gh-665)
• Restore class files after an unsuccessful compilation. This is useful when an
error occurs in a later incremental step that requires a fix in the originally
changed files.
• Better auto-generated IDs for default projects. (gh-554)
• Fork run directly with ‘java’ to avoid additional class loader from ‘scala’
command. (gh-702)
• Make autoCompilerPlugins support compiler plugins defined in an internal dependency (only if exportJars := true due to scalac limitations)
• Track ancestors of non-private templates and use this information to re-
quire fewer, smaller intermediate incremental compilation steps.
• autoCompilerPlugins now supports compiler plugins defined in an internal dependency. The plugin project must define exportJars := true. Depend on the plugin with ...dependsOn(... % Configurations.CompilerPlugin).
• Add utilities for debugging API representation extracted by the incremen-
tal compiler. (Grzegorz K., gh-677, gh-793)
• consoleProject unifies the syntax for getting the value of a setting and
executing a task. See Console Project.
Other
• The source layout for the sbt project itself follows the package name to accommodate Eclipse users. (Grzegorz K., gh-613)
camelCase Key names

The convention for key names is now camelCase
only instead of camelCase for Scala identifiers and hyphenated, lower-case on
the command line. camelCase is accepted for existing hyphenated key names
and the hyphenated form will still be accepted on the command line for those
existing tasks and settings declared with hyphenated names. Only camelCase
will be shown for tab completion, however.
New key definition methods

There are new methods that help avoid duplicating key names by declaring keys as:
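The declaration examples were lost in conversion; they would have looked something like this (the key names and descriptions here are illustrative):

```scala
val hello = taskKey[Unit]("Prints a greeting")
val greeting = settingKey[String]("The greeting to print")
```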
The name will be picked up from the val identifier by the implementation of the
taskKey macro so there is no reflection needed or runtime overhead. Note that
a description is mandatory and the method taskKey begins with a lowercase
t. Similar methods exist for keys for settings and input tasks: settingKey and
inputKey.
New task/setting syntax

First, the old syntax is still supported with the intention of allowing conversion to the new syntax at your leisure. There may
be some incompatibilities and some may be unavoidable, but please report any
issues you have with an existing build.
The new syntax is implemented by making :=, +=, and ++= macros and making
these the only required assignment methods. To refer to the value of other
settings or tasks, use the value method on settings and tasks. This method is
a stub that is removed at compile time by the macro, which will translate the
implementation of the task/setting to the old syntax.
For example, the following declares a dependency on scala-reflect using the
value of the scalaVersion setting:
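The snippet referred to here was lost; it is presumably the standard declaration for this case:

```scala
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
```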
The value method is only allowed within a call to :=, +=, or ++=. To construct
a setting or task outside of these methods, use Def.task or Def.setting. For
example,
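The definition of the reflectDep value used just below was lost in conversion; with Def.setting it would read something like:

```scala
val reflectDep = Def.setting {
  "org.scala-lang" % "scala-reflect" % scalaVersion.value
}
```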
libraryDependencies += reflectDep.value
myInputTask := {
// Define the parser, which is the standard space-delimited arguments parser.
val args = Def.spaceDelimited("<args>").parsed
// Demonstrates using a setting value and a task result:
println("Project name: " + name.value)
println("Classpath: " + (fullClasspath in Compile).value.map(_.data))
println("Arguments:")
for(arg <- args) println(" " + arg)
}
.sbt format enhancements

vals and defs are now allowed in .sbt files. They must follow the same rules as settings concerning blank lines, although multiple definitions may be grouped together. For example,
val n = "widgets"
val o = "org.example"
name := n
organization := o
All definitions are compiled before settings, but it will probably be best practice
to put definitions together. Currently, the visibility of definitions is restricted
to the .sbt file it is defined in. They are not visible in consoleProject or the
set command at this time, either. Use Scala files in project/ for visibility in
all .sbt files.
vals of type Project are added to the Build so that multi-project builds can
be defined entirely in .sbt files now. For example,
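The example was lost in conversion; a multi-project build defined entirely in a .sbt file would look something like this (project names are illustrative):

```scala
lazy val core = Project("core", file("core"))

lazy val util = Project("util", file("util")).dependsOn(core)
```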
Currently, it only makes sense to define these in the root project’s .sbt files.
A shorthand for defining Projects is provided by a new macro called project.
This requires the constructed Project to be directly assigned to a val. The name
of this val is used for the project ID and base directory. The base directory can
be changed with the in method. The previous example can also be written as:
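Rewritten with the project macro, the same build might read as follows (again with illustrative names; the in method overrides the default base directory):

```scala
lazy val core = project

lazy val util = (project in file("util")).dependsOn(core)
```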
Control over automatically added settings

sbt loads settings from a few places in addition to the settings explicitly defined by the Project.settings field. These include plugins, global settings, and .sbt files. The new
Project.autoSettings method configures these sources: whether to include
them for the project and in what order.
Project.autoSettings accepts a sequence of values of type AddSettings. In-
stances of AddSettings are constructed from methods in the AddSettings com-
panion object. The configurable settings are per-user settings (from ~/.sbt,
for example), settings from .sbt files, and plugin settings (project-level only).
The order in which these instances are provided to autoSettings determines
the order in which they are appended to the settings explicitly provided in
Project.settings.
For .sbt files, AddSettings.defaultSbtFiles adds the settings from all
.sbt files in the project’s base directory as usual. The alternative method
AddSettings.sbtFiles accepts a sequence of Files that will be loaded
according to the standard .sbt format. Relative files are resolved against the
project’s base directory.
Plugin settings may be included on a per-Plugin basis by using the
AddSettings.plugins method and passing a Plugin => Boolean. The
settings controlled here are only the automatic per-project settings. Per-build
and global settings will always be included. Settings that plugins require to be
manually added still need to be added manually.
For example,
import AddSettings._
lazy val sub = Project("sub", file("Sub")) autoSettings(
defaultSbtFiles, plugins(includePlugin)
)
sbt still needs access to the compiler and its dependencies in order to run
compile, console, and other Scala-based tasks. So, the Scala compiler jar and
dependencies (like scala-reflect.jar and scala-library.jar) are defined and resolved
in the scala-tool configuration (unless scalaHome is defined). By default, this
configuration and the dependencies in it are automatically added by sbt. This
occurs even when dependencies are configured in a pom.xml or ivy.xml and so
it means that the version of Scala defined for your project must be resolvable
by the resolvers configured for your project.
If you need to manually configure where sbt gets the Scala compiler and library
used for compilation, the REPL, and other Scala tasks, do one of the following:
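The list of options is elided in this extraction; one of them presumably involves scalaHome, which bypasses resolution of the scala-tool configuration. A sketch (the path is illustrative):

```scala
// use a local Scala installation; sbt takes the compiler and library
// jars from its lib/ directory instead of resolving them
scalaHome := Some(file("/opt/scala-2.10.4"))
```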
sbt 0.12.0 Changes
• The cross versioning convention has changed for Scala versions 2.10 and
later as well as for sbt plugins.
• When invoked directly, ‘update’ will always perform an update (gh-335)
• The sbt plugins repository is added by default for plugins and plugin
definitions. gh-380
• Plugin configuration directory precedence has changed (see details section
below)
• Source dependencies have been fixed, but the fix required changes (see
details section below)
• Aggregation has changed to be more flexible (see details section below)
• Task axis syntax has changed from key(for task) to task::key (see details
section below)
• The organization for sbt has changed to org.scala-sbt (was: org.scala-tools.sbt). This affects users of the scripted plugin in particular.
• artifactName type has changed to (ScalaVersion, ModuleID, Artifact) => String
• javacOptions is now a task
• session save overwrites settings in build.sbt (when appropriate). gh-369
• scala-library.jar is now required to be on the classpath in order to compile
Scala code. See the scala-library.jar section at the bottom of the page for
details.
Features
Fixes
• Delete a symlink and not its contents when recursively deleting a directory.
• Fix detection of ancestors for java sources
• Fix the resolvers used for update-sbt-classifiers (gh-304)
• Fix auto-imports of plugins (gh-412)
• Argument quoting (see details section below)
• Properly reset JLine after being stopped by Ctrl+z (unix only). gh-394
Improvements
• The launcher can launch all released sbt versions back to 0.7.0.
• A more refined hint to run ‘last’ is given when a stack trace is suppressed.
• Use java 7 Redirect.INHERIT to inherit the input stream of a subprocess (gh-462, gh-327). This should fix issues when forking interactive programs. (@vigdorchik)
• Mirror ivy ‘force’ attribute (gh-361)
• Various improvements to the help and tasks commands as well as the new settings command (gh-315)
• Bump jsch version to 0.1.46. (gh-403)
• Improved help commands: help, tasks, settings.
• Bump to JLine 1.0 (see details section below)
• Global repository setting (see details section below)
• Other fixes/improvements: gh-368, gh-377, gh-378, gh-386, gh-387, gh-388, gh-389
Experimental or In-progress
1. Ideally, a project should ensure there is never a conflict. Both styles are
still supported; only the behavior when there is a conflict has changed.
2. In practice, switching from an older branch of a project to a new branch
would often leave an empty project/plugins/ directory that would cause
the old style to be used, despite there being no configuration there.
3. Therefore, the intention is that this change is strictly an improvement for
projects transitioning to the new style and isn’t noticed by other projects.
Parsing task axis There is an important change related to parsing the task axis for settings and tasks that fixes gh-202.
Aggregation Aggregation has been made more flexible. This is along the
direction that has been previously discussed on the mailing list.
1. Before 0.12, a setting was parsed according to the current project and only
the exact setting parsed was aggregated.
2. Also, tab completion did not account for aggregation.
3. This meant that if the setting/task didn’t exist on the current project,
parsing failed even if an aggregated project contained the setting/task.
4. Additionally, if compile:package existed for the current project, *:package
existed for an aggregated project, and the user requested ‘package’ to run
(without specifying the configuration), *:package wouldn’t be run on the
aggregated project (because it isn’t the same as the compile:package key
that existed on the current project).
5. In 0.12, both of these situations result in the aggregated settings being
selected. For example,
1. Consider a project root that aggregates a subproject sub.
2. root defines *:package.
3. sub defines compile:package and compile:compile.
4. Running root/package will run root/*:package and sub/compile:package
5. Running root/compile will run sub/compile:compile
6. This change was made possible in part by the change to task axis parsing.
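The scenario in the list above can be sketched as a build definition (names taken from the example; the exact settings are omitted):

```scala
lazy val sub = Project("sub", file("sub"))

// root aggregates sub: tasks requested on root also run on sub
lazy val root = Project("root", file(".")) aggregate(sub)
```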
Parallel Execution Fine control over parallel execution is supported as described here: Parallel Execution.
1. The default behavior should be the same as before, including the parallelExecution settings.
2. The new capabilities of the system should otherwise be considered experimental.
3. Therefore, parallelExecution won’t be deprecated at this time.
1. The version of a plugin is fixed by the first build to load it. In particular, the plugin version used in the root build (the one in which sbt is started) always overrides the version used in dependencies.
2. Plugins from all builds are loaded in the same class loader.
Additionally, Sanjin’s patches to add support for hg and svn URIs are included.
Cross building The cross version suffix is shortened to only include the major
and minor version for Scala versions starting with the 2.10 series and for sbt
versions starting with the 0.12 series. For example, sbinary_2.10 for a normal
library or sbt-plugin_2.10_0.12 for an sbt plugin. This requires forward and
backward binary compatibility across incremental releases for both Scala and
sbt.
1. This change has been a long time coming, but it requires everyone publishing an open source project to switch to 0.12 to publish for 2.10 or adjust the cross versioned prefix in their builds appropriately.
2. Obviously, using 0.12 to publish a library for 2.10 requires 0.12.0 to be
released before projects publish for 2.10.
3. There is now the concept of a binary version. This is a subset of the full version string that represents binary compatibility. That is, equal binary versions imply binary compatibility. All Scala versions prior to 2.10 use the full version for the binary version to reflect previous sbt behavior. For 2.10 and later, the binary version is <major>.<minor>.
4. The cross version behavior for published artifacts is configured by the crossVersion setting. It can be configured for dependencies by using the cross method on ModuleID or by the traditional %% dependency construction variant. By default, a dependency has cross versioning disabled when constructed with a single % and uses the binary Scala version when constructed with %%.
5. The artifactName function now accepts a type ScalaVersion as its first argument instead of a String. The full type is now (ScalaVersion, ModuleID, Artifact) => String. ScalaVersion contains both the full Scala version (such as 2.10.0) as well as the binary Scala version (such as 2.10).
6. The flexible version mapping added by Indrajit has been merged into the
cross method and the %% variants accepting more than one argument
have been deprecated. See Cross Build for details.
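As an illustration of point 4, the two construction variants (coordinates are examples only):

```scala
// %% appends the binary Scala version, resolving e.g. sbinary_2.10
libraryDependencies += "org.scala-tools.sbinary" %% "sbinary" % "0.4.1"

// a single % disables cross versioning; the name is used verbatim
libraryDependencies += "junit" % "junit" % "4.10"
```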
test-quick test-quick (gh-393) runs the tests specified as arguments (or all tests if no arguments are given) that failed in the previous run, have not yet been run, or had a dependency change.
scala-library.jar
It was possible to compile code without scala-library as a dependency, for example, but this was a misfeature. Instead, the Scala library should be declared as provided:
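The snippet that followed is missing from this extraction; declaring the Scala library as provided presumably looked something like this (a sketch; the version is illustrative):

```scala
libraryDependencies += "org.scala-lang" % "scala-library" % "2.9.2" % "provided"
```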
Older Changes
0.12.3 to 0.12.4
0.12.2 to 0.12.3
• Disable Ivy debug-level logging for performance. (gh-635)
• Invalidate artifacts not recorded in the original metadata when a module
marked as changing changes. (gh-637, gh-641)
• Ivy Artifact needs wildcard configuration added if no explicit ones are
defined. (gh-439)
• Correct precedence of sbt.boot.properties lookup; handle the qualifier correctly. (gh-651)
• Mark the tests failed exception as having already provided feedback.
• Handle exceptions not caught by the test framework when forking. (gh-653)
• Support reload plugins after ignoring a failure to load a project.
• Work around OS deadlock detection at the process level. (gh-650)
• Fix for dependency on class file corresponding to a package. (Grzegorz
K., gh-620)
• Fix incremental compilation problem with package objects inheriting from
invalidated sources in a subpackage.
• Use Ivy’s default name for the resolution report so that links to other
configurations work.
• Include jars from java.ext.dirs in incremental classpath. (gh-678)
• Multi-line prompt text offset issue (Jibbers42, gh-625)
• Added xml:space="preserve" attribute to extraDependencyAttributes
XML Block for publishing poms for plugins dependent on other plugins
(Brendan M., gh-645)
• Tag the actual test task and not a later task. (gh-692)
• Make exclude-classifiers per-user instead of per-build. (gh-634)
• Load global plugins in their own class loader and replace the base loader
with that. (gh-272)
• Demote the default conflict warnings to the debug level. These will be
removed completely in 0.13. (gh-709)
• Fix Ivy cache issues when multiple resolvers are involved. (gh-704)
0.12.1 to 0.12.2
• Use java.lang.Throwable.setStackTrace when sending exceptions
back from forked tests. (Eugene V., gh-543)
• Don’t merge dependencies with mismatched transitive/force/changing values. (gh-582)
• Filter out null parent files when deleting empty directories. (Eugene V.,
gh-589)
• Work around File constructor not accepting URIs for UNC paths. (gh-564)
• Split ForkTests react() out to work around SI-6526 (avoids a stack overflow in some forked test situations)
• Maven-style ivy repo support in the launcher config (Eric B., gh-585)
• Compare external binaries with canonical files (nau, gh-584)
• Call System.exit after the main thread is finished. (Eugene V., gh-565)
• Abort running tests on the first failure to communicate results back to the
main process. (Eugene V., gh-557)
• Don’t let the right side of the alias command fail the parse. (gh-572)
• API extraction: handle any type that is annotated, not just the spec’d
simple type. (gh-559)
• Don’t try to look up the class file for a package. (gh-620)
0.12.0 to 0.12.1
• Merge multiple dependency definitions for the same ID. Workaround for
gh-468, gh-285, gh-419, gh-480.
• Don’t write section of pom if scope is ‘compile’.
• Ability to properly match on artifact type. Fixes gh-507 (Thomas).
• Force update to run on changes to last modified time of artifacts or cached
descriptor (part of fix for gh-532). It may also fix issues when working
with multiple local projects via ‘publish-local’ and binary dependencies.
• Per-project resolution cache that deletes cached files before update. Notes:
• The resolution cache differs from the repository cache and does
not contain dependency metadata or artifacts.
• The resolution cache contains the generated ivy files, properties,
and resolve reports for the project.
• There will no longer be individual files directly in ~/.ivy2/cache/
• Resolve reports are now in target/resolution-cache/reports/,
viewable with a browser.
• Cache location includes extra attributes so that cross builds of
a plugin do not overwrite each other. Fixes gh-532.
Three stage incremental compilation:
• As before, the first step recompiles sources that were edited (or otherwise
directly invalidated).
• The second step recompiles sources from the first step whose API has
changed, their direct dependencies, and sources forming a cycle with these
sources.
• The third step recompiles transitive dependencies of sources from the second step whose API changed.
• Code relying mainly on composition should see decreased compilation
times with this approach.
• Code with deep inheritance hierarchies and large cycles between sources
may take longer to compile.
• last compile will show cycles that were processed in step 2. Reducing
large cycles of sources shown here may decrease compile times.
0.11.3 to 0.12.0
The changes for 0.12.0 are listed on a separate page. See sbt 0.12.0 changes.
0.11.2 to 0.11.3
Dropping scala-tools.org:
Other fixes:
0.11.1 to 0.11.2
• The local Maven repository has been removed from the launcher’s list of default repositories, which is used for obtaining sbt and Scala dependencies. This is motivated by the high probability that including this repository was causing the various problems some users have with the launcher not finding some dependencies (gh-217).
Fixes:
• gh-257 Fix invalid classifiers in pom generation (Indrajit)
• gh-255 Fix scripted plugin descriptor (Artyom)
• Fix forking git on windows (Stefan, Josh)
• gh-261 Fix whitespace handling for semicolon-separated commands
• gh-263 Fix handling of dependencies with an explicit URL
• gh-272 Show deprecation message for project/plugins/
0.11.0 to 0.11.1
Breaking change:
• The scripted plugin is now in the sbt package so that it can be used from
a named package
• By default, there is more logging during update: one line per dependency
resolved and two lines per dependency downloaded. This is to address the
appearance that sbt hangs on larger ’update’s.
• gh-212 Fix transitive plugin dependencies.
• gh-222 Generate section in make-pom. (Jan)
• Build resolvers, loaders, and transformers.
• Allow project dependencies to be modified by a setting (buildDependencies) but with the restriction that new builds cannot be introduced.
• gh-174, gh-196, gh-201, gh-204, gh-207, gh-208, gh-226, gh-224, gh-253
0.10.1 to 0.11.0
Major Improvements:
• Support using native libraries in run and test (but not console, for example)
• Display all undefined settings at once, instead of only the first one
• Deprecate separate classpathFilter, defaultExcludes, and sourceFilter keys in favor of includeFilter and excludeFilter explicitly scoped by unmanagedSources, unmanagedResources, or unmanagedJars as appropriate (Indrajit)
• Default to using shared boot directory in ~/.sbt/boot/
• Can put contents of project/plugins/ directly in project/ instead. Will
likely deprecate plugins/ directory
• Key display is context sensitive. For example, in a single project, the build
and project axes will not be displayed
• gh-114, gh-118, gh-121, gh-132, gh-135, gh-157: Various settings and error
message improvements
• gh-115: Support configuring checksums separately for publish and
update
• gh-118: Add about command
• gh-118, gh-131: Improve last command. Aggregate last <task> and
display all recent output for last
• gh-120: Support read-only external file projects (Fred)
• gh-128: Add skip setting to override recompilation change detection
• gh-139: Improvements to pom generation (Indrajit)
• gh-140, gh-145: Add standard manifest attributes to binary and source
jars (Indrajit)
• Allow sources used for doc generation to be different from sources for
compile
• gh-156: Made package an alias for package-bin
• gh-162: handling of optional dependencies in pom generation
0.10.0 to 0.10.1
0.7.7 to 0.10.0
• New configuration system: See .sbt build example, .scala build definition and .sbt build definition.
• New task engine: Tasks
• New multiple project support: .scala build definition
• More aggressive incremental recompilation for both Java and Scala sources
• Merged plugins and processors into improved plugins system: Plugins
• Web application and webstart support moved to plugins instead of core
features
• Fixed all of the issues in (Google Code) issue #44
• Managed dependencies automatically updated when configuration changes
• update-sbt-classifiers and update-classifiers tasks for retrieving
sources and/or javadocs for dependencies, transitively
• Improved artifact handling and configuration: Artifacts
• Tab completion parser combinators for commands and input tasks: Commands
• No project creation prompts anymore
• Moved to GitHub: http://github.com/harrah/xsbt
0.7.5 to 0.7.7
0.7.4 to 0.7.5
0.7.3 to 0.7.4
• prefix continuous compilation with run number for better feedback when
logging level is ‘warn’
• Added pomIncludeRepository(repo: MavenRepository): Boolean
that can be overridden to exclude local repositories by default
• Added pomPostProcess(pom: Node): Node to make advanced manipulation of the default pom easier (pomExtra already covers basic cases)
• Added reset command to reset JLine terminal. This needs to be run
after suspending and then resuming sbt.
• Installer plugin is now a proper subproject of sbt.
• Plugins can now only be Scala sources. BND should be usable in a plugin
now.
• More accurate detection of invalid test names. Invalid test names now
generate an error and prevent the test action from running instead of just
logging a warning.
• Fix issue with using 2.8.0.RC1 compiler in tests.
• Precompile compiler interface against 2.8.0.RC2
• Add consoleOptions for specifying options to the console. It defaults to
compileOptions.
• Properly support sftp/ssh repositories using key-based authentication. See
the updated section of the Resolvers page.
• def ivyUpdateLogging = UpdateLogging.DownloadOnly | Full |
Quiet. Default is DownloadOnly. Full will log metadata resolution and
provide a final summary.
• offline property for disabling checking for newer dynamic revisions (like
-SNAPSHOT). This allows working offline with remote snapshots. Not
honored for plugins yet.
• History commands: !!, !?string, !-n, !n, !string, !:n, !:. Run ! to see help.
• New section in launcher configuration [ivy] with a single label cache-directory. Specify this to change the cache location used by the launcher.
• New label classifiers under [app] to specify classifiers of additional
artifacts to retrieve for the application.
• Honor -Xfatal-warnings option added to compiler in 2.8.0.RC2.
• Make scaladocTask a fileTask so that it runs only when index.html is
older than some input source.
• Made it easier to create default test-* tasks with different options
• Sort input source files for consistency, addressing scalac’s issues with
source file ordering.
• Derive Java source file from name of class file when no SourceFile attribute
is present in the class file. Improves tracking when -g:none option is used.
• Fix FileUtilities.unzip to be tail-recursive again.
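The pomIncludeRepository and pomPostProcess hooks listed above might be overridden like this in a 0.7.x project definition (a sketch; the bodies are illustrative only):

```scala
// exclude every repository from the generated pom (illustrative)
override def pomIncludeRepository(repo: MavenRepository) = false

// return the pom unchanged; a real override would rewrite nodes here
override def pomPostProcess(pom: scala.xml.Node): scala.xml.Node = pom
```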
0.7.2 to 0.7.3
0.7.1 to 0.7.2
and will not be packaged by package. For example, to exclude the GAE
datastore directory:
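The snippet is elided in this extraction; in the 0.7.x API this was presumably an override along these lines (webappUnmanaged and temporaryWarPath are the names I am assuming):

```scala
// exclude GAE's generated datastore directory from the packaged war
override def webappUnmanaged =
  (temporaryWarPath / "WEB-INF" / "appengine-generated" ***)
```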
0.7.0 to 0.7.1
0.5.6 to 0.7.0
• Unified batch and interactive commands. All commands that can be executed at the interactive prompt can be run from the command line. To run commands and then enter the interactive prompt, make the last command ‘shell’.
• Properly track certain types of synthetic classes, such as for comprehensions with more than 30 clauses, during compilation.
• Jetty 7 support
• Allow launcher in the project root directory or the lib directory. The jar
name must have the form ‘sbt-launch.jar’ in order to be excluded from the
classpath.
• Stack trace detail can be controlled with ‘on’, ‘off’, ‘nosbt’, or an integer level. ‘nosbt’ means to show stack frames up to the first sbt method. An integer level denotes the number of frames to show for each cause. This feature is courtesy of Tony Sloane.
• New action ‘test-run’ method that is analogous to ‘run’, but for test
classes.
• New action ‘clean-plugins’ task that clears built plugins (useful for plugin
development).
• Can provide commands from a file with new command: <filename
• Can provide commands over loopback interface with new command: <port
• Scala version handling has been completely redone.
• The version of Scala used to run sbt (currently 2.7.7) is decoupled from
the version used to build the project.
• Changing between Scala versions on the fly is done with the command:
++<version>
• Cross-building is quicker. The project definition does not need to be recompiled against each version in the cross-build anymore.
• Scala versions are specified in a space-delimited list in the build.scala.versions
property.
• Dependency management:
• make-pom task now uses custom pom generation code instead of Ivy’s pom
writer.
• Basic support for writing out Maven-style repositories to the pom
• Override the ‘pomExtra’ method to provide XML (scala.xml.NodeSeq)
to insert directly into the generated pom.
• Complete control over repositories is now possible by overriding ivyRepositories.
• The interface to Ivy can be used directly.
• Test framework support is now done through a uniform test interface.
Implications:
• New versions of specs, ScalaCheck, and ScalaTest are supported as soon
as they are released.
• Support is better, since the test framework authors provide the implementation.
• Arguments can be passed to the test framework. For example: test-only your.test -- -a -b -c
• Can provide custom task start and end delimiters by defining the system
properties sbt.start.delimiter and sbt.end.delimiter.
• Revamped launcher that can launch Scala applications, not just sbt
• Provide a configuration file to the launcher and it can download the application and its dependencies from a repository and run it.
• sbt’s configuration can be customized. For example,
• The sbt version to use in projects can be fixed, instead of read from
project/build.properties.
• The default values used to create a new project can be changed.
• The repositories used to fetch sbt and its dependencies, including Scala,
can be configured.
• The location sbt is retrieved to is configurable. For example,
/home/user/.ivy2/sbt/ could be used instead of project/boot/.
0.5.5 to 0.5.6
0.5.4 to 0.5.5
0.5.2 to 0.5.4
• Many logging related changes and fixes. Added FilterLogger and cleaned
up interaction between Logger, scripted testing, and the builder projects.
This included removing the recordingDepth hack from Logger. Logger
buffering is now enabled/disabled per thread.
• Allow multiple instances of Jetty (new jettyRunTasks can be defined with
different ports)
• jettyRunTask accepts configuration in a single configuration wrapper object instead of many parameters
• Fix web application class loading (issue #35) by using jettyClasspath = testClasspath --- jettyRunClasspath for loading Jetty. A better way would be to have a jetty configuration and have jettyClasspath = managedClasspath(‘jetty’), but this maintains compatibility.
• Copy resources to target/resources and target/test-resources using copyResources and copyTestResources tasks. Properly include all resources in web applications and classpaths (issue #36). mainResources and testResources are now the definitive methods for getting resources.
• Updated for 2.8 (sbt now compiles against September 11, 2009 nightly
build of Scala)
• Fixed issue with position of ^ in compile errors
• Changed order of repositories (local, shared, Maven Central, user, Scala
Tools)
• Added Maven Central to resolvers used to find Scala library/compiler in
launcher
• Fixed problem that prevented detecting user-specified subclasses
• Fixed exit code returned when exception thrown in main thread for
TrapExit
• Added javap task to DefaultProject. It has tab completion on compiled project classes and the run classpath is passed to javap so that library classes are available. Examples:
• Added exec task. Mix in Exec to the project definition to use it. This forks the command following exec. Examples:
• Added sh task for users with a unix-style shell available (runs /bin/sh -c <arguments>). Mix in Exec to the project definition to use it. Example:
• Proper dependency graph actions (previously an unsupported prototype): graph-src and graph-pkg for source dependency graph and quasi-package dependency graph (based on source directories and source dependencies)
To specify in a dependency:
0.5.1 to 0.5.2
• Fixed problem where dependencies of sbt plugins were not on the compile
classpath
• Added execTask that runs an sbt.ProcessBuilder when invoked
• Can define and use an sbt test framework extension in a project
• Fixed run action swallowing exceptions
• Fixed tab completion for method tasks for multi-project builds
• Check that tasks in compoundTask do not reference static tasks
• Make toString of Paths in subprojects relative to root project directory
• crossScalaVersions is now inherited from parent if not specified
• Added scala-library.jar to the javac classpath
• Project dependencies are added to published ivy.xml
• Added dependency tracking for Java sources using classfile parsing (with
the usual limitations)
• Added Process.cat that will send contents of URLs and Files to standard output. Alternatively, cat can be used on a single URL or File. Example:
import java.net.URL
import java.io.File
val spde = new URL("http://technically.us/spde/About")
val dispatch = new URL("http://databinder.net/dispatch/About")
val build = new File("project/build.properties")
cat(spde, dispatch, build) #| "grep -i scala" !
0.4.6 to 0.5/0.5.1
• Dependency management and multiple Scala versions
• Experimental plugin for producing project bootstrapper in a self-
extracting jar
• Added ability to directly specify URL to use for dependency with the
from(url: URL) method defined on ModuleID
• Fixed issue #30
• Support cross-building with + when running batch actions
• Additional flattening for project definitions: sources can go either in
project/build/src (recursively) or project/build (flat)
• Fixed manual reboot not changing the version of Scala when it is manually
set
• Fixed tab completion for cross-building
• Fixed a class loading issue with web applications
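The from(url) method mentioned above attaches an explicit URL to a dependency; in a 0.7-style project definition it might look like this (a sketch; coordinates and URL are illustrative):

```scala
// fetch this dependency directly from the given URL
val slinky = "slinky" % "slinky" % "2.1" from "http://example.org/artifacts/slinky-2.1.jar"
```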
0.4.5 to 0.4.6
• Parallelization at the subtask level
• Parallel test execution at the suite/specification level.
0.4.3 to 0.4.5
0.4 to 0.4.3
• Direct dependencies on Scala libraries are checked for version equality with
scala.version
• Transitive dependencies on scala-library and scala-compiler are filtered
• They are fixed by scala.version and provided on the classpath by sbt
• To access them, use the scalaJars method, classOf[ScalaObject].getProtectionDomain.getCodeSource,
or mainCompileConditional.analysis.allExternals
• The configurations checked/filtered as described above are configurable.
Nonstandard configurations are not checked by default.
• Version of sbt and Scala printed on startup
• Launcher asks if you want to try a different version if sbt or Scala could
not be retrieved.
• After changing scala.version or sbt.version with set, note is printed
that reboot is required.
• Moved managed dependency actions to BasicManagedProject (update is
now available on ParentProject)
• Cleaned up sbt’s build so that you just need to do update and full-build
to build from source. The trunk version of sbt will be available for use
from the loader.
• The loader is now a subproject.
• For development, you’ll still want the usual actions (such as package) for
the main builder and proguard to build the loader.
• Fixed analysis plugin improperly including traits/abstract classes in subclass search
• ScalaProjects already had everything required to be parent projects:
flipped the switch to enable it
• Proper method task support in scripted tests (package group tests rightly
pass again)
• Improved tests in loader that check that all necessary libraries were downloaded properly
0.3.7 to 0.4
• Scripted tests now test the version of sbt being built instead of the version
doing the building.
• testResources is put on the test classpath instead of testResourcesPath
• Added jetty-restart, which does jetty-stop and then jetty-run
• Added automatic reloading of default web application
• Changed packaging behaviors (still likely to change)
• Inline configurations now allowed (can be used with configurations in inline
XML)
• Split out some code related to managed dependencies from BasicScalaProject to new class BasicManagedProject
• Can specify that maven-like configurations should be automatically declared
• Fixed problem with nested modules being detected as tests
• testResources, integrationTestResources, and mainResources should
now be added to appropriate classpaths
• Added project organization as a property that defaults to inheriting from
the parent project.
• Project creation now prompts for the organization.
• Added method tasks, which are top-level actions with parameters.
• Made help, actions, and methods commands available to batch-style
invocation.
• Applied Mikko’s two fixes for webstart and fixed problem with
pack200+sign. Also, fixed nonstandard behavior when gzip enabled.
• Added control method to Logger for action lifecycle logging
• Made standard logging level convenience methods final
• Made BufferedLogger have a per-actor buffer instead of a global buffer
• Added a SynchronizedLogger and a MultiLogger (intended to be used
with the yet unwritten FileLogger)
• Changed method of atomic logging to be a method logAll accepting
List[LogEvent] instead of doSynchronized
• Improved action lifecycle logging
• Parallel logging now provides immediate feedback about starting an action
• General cleanup, including removing unused classes and methods and reducing dependencies between classes
• run is now a method task that accepts options to pass to the main method
(runOptions has been removed, runTask is no longer interactive, and run
no longer starts a console if mainClass is undefined)
• Major task execution changes:
• Tasks automatically have implicit dependencies on tasks with the same
name in dependent projects
• Implicit dependencies on interactive tasks are ignored, explicit dependencies produce an error
• Interactive tasks must be executed directly on the project on which they
are defined
• Method tasks accept input arguments (Array[String]) and dynamically
create the task to run
• Tasks can depend on tasks in other projects
• Tasks are run in parallel breadth-first style
• Added test-only method task, which restricts the tests to run to only
those passed as arguments.
• Added test-failed method task, which restricts the tests to run. First,
only tests passed as arguments are run. If no tests are passed, no filtering
is done. Then, only tests that failed the previous run are run.
• Added test-quick method task, which restricts the tests to run. First,
only tests passed as arguments are run. If no tests are passed, no filtering
is done. Then, only tests that failed the previous run or had a dependency
change are run.
• Added launcher that allows declaring version of sbt/scala to build project
with.
• Added tab completion with ~
• Added basic tab completion for method tasks, including test-*
• Changed default pack options to be the default options of Pack200.Packer
• Fixed ~ behavior when action doesn’t exist
0.3.6 to 0.3.7
0.3.5 to 0.3.6
0.3.2 to 0.3.5
• Added help action to tab completion
• Added handling of (effectively empty) scala source files that create no class
files: they are always interpreted as modified.
• Added prompt to retry project loading if compilation fails
• package action now uses fileTask so that it only executes if files are out
of date
• fixed ScalaTest framework wrapper so that it fails the test action if tests
fail
• Inline dependencies can now specify configurations
0.3.1 to 0.3.2
0.3 to 0.3.1
0.2.3 to 0.3
0.2.2 to 0.2.3
0.2.1 to 0.2.2
0.2.0 to 0.2.1
0.1.9 to 0.2.0
0.1.8 to 0.1.9
• Split compilation into separate main and test compilations.
• A failure in a ScalaTest run now fails the test action.
• Implemented reporters for compile/scaladoc, ScalaTest, ScalaCheck,
and specs that delegate to the appropriate sbt.Logger.
0.1.7 to 0.1.8
0.1.6 to 0.1.7
• Added graph action to generate dot files (for graphviz) from dependency
information (work in progress).
• Options are now passed to tasks as varargs.
• Redesigned Path properly, including PathFinder returning a Set[Path]
now instead of Iterable[Path].
• Moved paths out of ScalaProject and into BasicProjectPaths to keep
path definitions separate from task definitions.
• Added initial support for managing third-party libraries through the
update task, which must be explicitly called (it is not a dependency of
compile or any other task). This is experimental, undocumented, and
known to be incomplete.
• Parallel execution implementation at the project level, disabled by default.
To enable, add override def parallelExecution = true to your project
definition. In order for logging to make sense, all project logging is buffered
until the project is finished executing. Still to be done is some sort of
notification of project execution (which ones are currently executing, how
many remain).
• run and console are now specified as “interactive” actions, which means
they are only executed on the project in which they are defined when
called directly, and not on all dependencies. Their dependencies are still
run on dependent projects.
• Generalized conditional tasks a bit. Of note is that analysis is no longer
required to be in metadata/analysis, but is now in target/analysis by
default.
• Message now displayed when project definition is recompiled on startup
• Project no longer inherits from Logger, but now has a log member.
• Dependencies passed to project are checked for null (may help with errors
related to initialization/circular dependencies)
• Task dependencies are checked for null
• Projects in a multi-project configuration are checked to ensure that output
paths are different (check can be disabled)
• Made update task globally synchronized because Ivy is not thread-safe.
• Generalized test framework, directly invoking frameworks now (used
reflection before).
• Moved license files to licenses/
• Added support for specs and some support for ScalaTest (the test action
doesn’t fail if ScalaTest tests fail).
• Added specs, ScalaCheck, ScalaTest jars to lib/
• These are now required for compilation, but are optional at runtime.
• Added the appropriate licenses and notices.
• Options for update action are now taken from updateOptions member.
• Fixed SbtManager inline dependency manager to work properly.
• Improved Ivy configuration handling (not compiled with test dependencies
yet though).
• Added case class implementation of SbtManager called SimpleManager.
• Project definitions not specifying dependencies can now use just a single
argument constructor.
0.1.5 to 0.1.6
• run and console handle System.exit and multiple threads in user code
under certain circumstances (see running project code).
0.1.4 to 0.1.5
• Changes in a project propagate the right source recompilations in
dependent projects
• Consequences:
• Recompilation when changing java/scala version
• Recompilation when upgrading libraries (again, as indicated in the second
point, situations where you have library-1.0.jar and library-2.0.jar on the
classpath at the same time are not handled predictably. Replacing
library-1.0.jar with library-2.0.jar should work as expected.)
• Changing sbt version will recompile project definitions
0.1.3 to 0.1.4
0.1.2 to 0.1.3
0.1.1 to 0.1.2
0.1 to 0.1.1
The assumption here is that you are familiar with sbt 0.7 but new to sbt 0.13.5.
sbt 0.13.5’s many new capabilities can be a bit overwhelming, but this page
should help you migrate to 0.13.5 with a minimum of fuss.
Why move to 0.13.5?
Step 1: Read the Getting Started Guide for sbt 0.13.5 Reading the
Getting Started Guide will probably save you a lot of confusion.
Step 2: Install sbt 0.13.5 Download sbt 0.13.5 as described on the setup
page.
You can run 0.13.5 the same way that you run 0.7.x, either simply:
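The launch command itself did not survive conversion; a typical direct invocation (assuming the launcher jar is named sbt-launch.jar and is in the current directory) looks like:

```shell
java -jar sbt-launch.jar
```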
Or (as most users do) with a shell script, as described on the setup page.
For more details see the setup page.
Rename your project/ directory to something like project-old. This will hide
it from sbt 0.13.5 but keep it in case you want to switch back to 0.7.x.
Create a build.sbt file in the root directory of your project. See .sbt build
definition in the Getting Started Guide for simple examples. If you have a
simple project, then converting your existing project file to this format is largely
a matter of re-writing your dependencies and maven archive declarations in a
modified yet familiar syntax.
This build.sbt file combines aspects of the old project/build/ProjectName.scala
and build.properties files. It looks like a property file, yet contains Scala
code in a special format.
A build.properties file like:
#Project properties
#Fri Jan 07 15:34:00 GMT 2011
project.organization=org.myproject
project.name=My Project
sbt.version=0.7.7
project.version=1.0
def.scala.version=2.7.7
build.scala.versions=2.8.1
project.initialize=false

becomes a build.sbt like:

version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.2"
Now launch sbt. If you’re lucky it works and you’re done. For help debugging,
see below.
If you get stuck and want to switch back, you can leave your build.sbt file
alone. sbt 0.7.x will not understand or notice it. Just rename your 0.13.5
project directory to something like project10 and rename the backup of your
old project from project-old to project again.
FAQs There’s a section in the FAQ about migration from 0.7 that covers
several other important points.
Contributing to sbt
Below is a running list of potential areas of contribution. This list may become
out of date quickly, so you may want to check on the sbt-dev mailing list if you
are interested in a specific topic.
warn test:run
Also, trace is currently an integer, but should really be an abstract data type.
7. Each sbt version has more aggressive incremental compilation and reproduc-
ing bugs can be difficult. It would be helpful to have a mode that generates a
diff between successive compilations and records the options passed to scalac.
This could be replayed or inspected to try to find the cause.
Documentation
1. There’s a lot to do with this documentation. If you check it out from git,
there’s a directory called Dormant with some content that needs going
through.
2. The main page mentions external project references (e.g. to a git repo) but
doesn’t have anything to link to that explains how to use those.
3. API docs are much needed.
4. Find useful answers or types/methods/values in the other docs, and pull
references to them up into /faq or /Name-Index so people can find the
docs. In general the /faq should feel a bit more like a bunch of pointers
into the regular docs, rather than an alternative to the docs.
5. A lot of the pages could probably have better names, and/or little 2-4
word blurbs to the right of them in the sidebar.
Detailed Topics
This part of the documentation has pages documenting particular sbt topics in
detail. Before reading anything in here, you will need the information in the
Getting Started Guide as a foundation.
Other resources include the How to and Developer’s Guide sections in this ref-
erence, and the API Documentation
Using sbt
This part of the documentation has pages documenting particular sbt topics in
detail. Before reading anything in here, you will need the information in the
Getting Started Guide as a foundation.
See Running in the Getting Started Guide for an intro to the basics, while this
page has a lot more detail.
Project-level tasks
Configuration-level tasks
• consoleQuick Starts the Scala interpreter with the project’s compile-time
dependencies on the classpath. test:consoleQuick uses the test dependen-
cies. This task differs from console in that it does not force compilation
of the current project’s sources.
• consoleProject Enters an interactive session with sbt and the build defi-
nition on the classpath. The build definition and related values are bound
to variables and common packages and values are imported. See the con-
soleProject documentation for more information.
• doc Generates API documentation for Scala source files in src/main/scala
using scaladoc. test:doc generates API documentation for source files
in src/test/scala.
• package Creates a jar file containing the files in src/main/resources and
the classes compiled from src/main/scala. test:package creates a jar
containing the files in src/test/resources and the classes compiled from
src/test/scala.
• packageDoc Creates a jar file containing API documentation generated
from Scala source files in src/main/scala. test:packageDoc creates a jar
containing API documentation for test source files in src/test/scala.
• packageSrc Creates a jar file containing all main source files and
resources. The packaged paths are relative to src/main/scala and
src/main/resources. Similarly, test:packageSrc operates on test source
files and resources.
• run <argument>* Runs the main class for the project in the same virtual
machine as sbt. The main class is passed the arguments provided. Please
see Running Project Code for details on the use of System.exit and multi-
threading (including GUIs) in code run by this action. test:run runs a
main class in the test code.
• runMain <main-class> <argument>* Runs the specified main class for
the project in the same virtual machine as sbt. The main class is passed
the arguments provided. Please see Running Project Code for details on
the use of System.exit and multithreading (including GUIs) in code run
by this action. test:runMain runs the specified main class in the test
code.
• test Runs all tests detected during test compilation. See Testing for
details.
• testOnly <test>* Runs the tests provided as arguments. * (will be)
interpreted as a wildcard in the test name. See Testing for details.
• testQuick <test>* Runs the tests specified as arguments (or all tests if
no arguments are given) that:
1. have not been run yet OR
2. failed the last time they were run OR
3. had any transitive dependencies recompiled since the last successful
run * (will be) interpreted as a wildcard in the test name. See Testing
for details.
General commands
• < filename Executes the commands in the given file. Each command
should be on its own line. Empty lines and lines beginning with ‘#’ are
ignored.
• + <command> Executes the project specified action or method for all ver-
sions of Scala defined in the crossScalaVersions setting.
• ++ <version|home-directory> <command> Temporarily changes the ver-
sion of Scala building the project and executes the provided command.
<command> is optional. The specified version of Scala is used until the
project is reloaded, settings are modified (such as by the set or session
commands), or ++ is run again. <version> does not need to be listed in
the build definition, but it must be available in a repository. Alternatively,
specify the path to a Scala installation.
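As a sketch (the version number is illustrative), cross-version commands look like:

```shell
> ++ 2.10.4
> ++ 2.10.4 compile
```

The first form switches the Scala version for subsequent commands; the second runs compile once under the selected version.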
Commands for managing the build definition
sbt.ivy.home
Directory
~/.ivy2
The directory containing the local Ivy repository and artifact cache
sbt.boot.directory
Directory
~/.sbt/boot
Path to shared boot directory
sbt.main.class
String
xsbt.inc.debug
Boolean
false
sbt.extraClasspath
Classpath Entries
(jar files or directories) that are added to sbt’s classpath. Note that the entries
are delimited by commas, e.g.: entry1, entry2,... See also resource in the sbt
launcher documentation.
sbt.version
Version
0.13.5
sbt version to use, usually taken from project/build.properties.
sbt.boot.properties
File
The path to find the sbt boot properties file. This can be a relative path, relative
to the sbt base directory, the users home directory or the location of the sbt jar
file, or it can be an absolute path or an absolute file URI.
sbt.override.build.repos
Boolean
false
If true, repositories configured in a build definition are ignored and the reposito-
ries configured for the launcher are used instead. See sbt.repository.config and
the sbt launcher documentation.
sbt.repository.config
File
~/.sbt/repositories
A file containing the repositories to use for the launcher. The format is the same
as a [repositories] section of an sbt launcher configuration file. This setting is
typically used in conjunction with setting sbt.override.build.repos to true (see
previous row and the sbt launcher documentation).
Console Project
Description
The consoleProject task starts the Scala interpreter with access to your project
definition and to sbt. Specifically, the interpreter is started up with these
commands already executed:
import sbt._
import Process._
import Keys._
import <your-project-definition>._
import currentState._
import extracted._
import cpHelpers._
For example, running external processes with sbt’s process library (to be in-
cluded in the standard library in Scala 2.9):
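The example itself was lost in conversion; inside consoleProject, running an external process might look like this (the command shown is arbitrary):

```scala
scala> "uname -a" !
```

The ! method, provided by the imported process library, runs the command, sends its output to the console, and returns the exit code.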
consoleProject can be useful for creating and modifying your build in the
same way that the Scala interpreter is normally used to explore writing code.
Note that this gives you raw access to your build. Think about what you pass
to IO.delete, for example.
Accessing settings
Examples
Evaluating tasks
State
> remainingCommands
> definedCommands.size
Cross-building
Introduction
Publishing Conventions
To use a library built against multiple versions of Scala, double the first % in an
inline dependency to be %%. This tells sbt that it should append the current
version of Scala being used to build the library to the dependency’s name. For
example:
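A sketch of the idea (module and version are illustrative):

```scala
// %% appends the current Scala version to the artifact name,
// so this resolves to e.g. scalacheck_2.10 under Scala 2.10.x:
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.11.4"
```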
Cross-Building a Project
> + package
> + publish
you make your project available to users for different versions of Scala. See
Publishing for more details on publishing your project.
In order to make this process as quick as possible, different output and managed
dependency directories are used for different versions of Scala. For example,
when building against Scala 2.10.0,
These are equivalent:
This overrides the defaults to always use the full Scala version instead of the
binary Scala version:
This uses a custom function to determine the Scala version to use based on the
binary Scala version:
This uses a custom function to determine the Scala version to use based on the
full Scala version:
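The snippets referred to above were lost in conversion; hedged reconstructions using sbt 0.13's CrossVersion module might look like:

```scala
// equivalent defaults: use the binary Scala version (as %% does)
crossVersion := CrossVersion.binary

// always use the full Scala version instead of the binary version
crossVersion := CrossVersion.full

// custom function of the binary Scala version (mapping is illustrative)
crossVersion := CrossVersion.binaryMapped {
  case "2.9.1" => "2.9.0" // pretend 2.9.1 is compatible with 2.9.0
  case bin     => bin
}

// custom function of the full Scala version (mapping is illustrative)
crossVersion := CrossVersion.fullMapped {
  case "2.10.4" => "2.10"
  case full     => full
}
```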
Central to sbt is the new configuration system, which is designed to enable exten-
sive customization. The goal of this page is to explain the general model behind
the configuration system and how to work with it. The Getting Started Guide
(see .sbt files) describes how to define settings; this page describes interacting
with them and exploring them at the command line.
Selecting commands, tasks, and settings
{<build-uri>}<project-id>/config:intask::key
This “scoped key” reference is used by commands like last and inspect and
when selecting a task to run. Only key is usually required by the parser; the
remaining optional pieces select the scope. These optional pieces are
individually referred to as scope axes. In the above description, {<build-uri>} and
<project-id>/ specify the project axis, config: is the configuration axis, and
intask is the task-specific axis. Unspecified components are taken to be the
current project (project axis) or auto-detected (configuration and task axes). An
asterisk (*) is used to explicitly refer to the Global context, as in */*:key.
> compile
> compile:compile
> root/compile
> root/compile:compile
> {file:/home/user/sample/}root/compile:compile
> test:consoleQuick
> test:console
> test:doc
> test:package
Task-specific Settings Some settings are defined per-task. This is used when
there are several related tasks, such as package, packageSrc, and packageDoc,
in the same configuration (such as compile or test). For package tasks, their
settings are the files to package, the options to use, and the output file to
produce. Each package task should be able to have different values for these
settings.
This is done with the task axis, which selects the task to apply a setting to. For
example, the following prints the output jar for the different package tasks.
> package::artifactPath
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1.jar
> packageSrc::artifactPath
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-src.jar
> packageDoc::artifactPath
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-doc.jar
> test:package::artifactPath
[info] /home/user/sample/target/scala-2.8.1.final/root_2.8.1-0.1-test.jar
Note that a single colon : follows a configuration axis and a double colon ::
follows a task axis.
This section discusses the inspect command, which is useful for exploring re-
lationships between settings. It can be used to determine which setting should
be modified in order to affect another setting, for example.
This shows that libraryDependencies has been defined on the current project
({file:/home/user/sample/}root) in the global configuration (*:). For a task
like update, the output looks like:
> inspect update
[info] Task: sbt.UpdateReport
[info] Provided by:
[info] {file:/home/user/sample/}root/*:update
...
Related Settings The “Related” section of inspect output lists all of the
definitions of a key. For example,
Dependencies Forward dependencies show the other settings (or tasks) used
to define a setting (or task). Reverse dependencies go the other direction, show-
ing what uses a given setting. inspect provides this information based on either
the requested dependencies or the actual dependencies. Requested dependen-
cies are those that a setting directly specifies. Actual settings are what those
dependencies get resolved to. This distinction is explained in more detail in the
following sections.
...
This shows the inputs to the console task. We can see that it gets its classpath
and options from fullClasspath and scalacOptions (for console). The
information provided by the inspect command can thus assist in finding the right
setting to change. The convention for keys, like console and fullClasspath,
is that the Scala identifier is camel case, while the String representation is
lowercase and separated by dashes. The Scala identifier for a configuration is
uppercase to distinguish it from tasks like compile and test. For example, we
can infer from the previous example how to add code to be run when the Scala
interpreter starts up:
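For example (the imported package is a placeholder for your own code):

```scala
initialCommands in console := "import myproject._"
```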
For initialCommands, we see that it comes from the global scope (*/*:). Com-
bining this with the relevant output from inspect console:
compile:console::initialCommands
we know that we can set initialCommands as generally as the global scope,
as specifically as the current project’s console task scope, or anywhere in between.
This means that we can, for example, set initialCommands for the whole project
and it will affect console:
The reason we might want to set it here is that other console tasks will use
this value now. We can see which ones use our new setting by looking at the
reverse dependencies output of inspect actual:
or configuration axis:
The next part describes the Delegates section, which shows the chain of
delegation for scopes.
Delegates A setting has a key and a scope. A request for a key in a scope A
may be delegated to another scope if A doesn’t define a value for the key. The
delegation chain is well-defined and is displayed in the Delegates section of the
inspect command. The Delegates section shows the order in which scopes are
searched when a value is not defined for the requested key.
As an example, consider the initial commands for console again:
> inspect console::initialCommands
...
[info] Delegates:
[info] *:console::initialCommands
[info] *:initialCommands
[info] {.}/*:console::initialCommands
[info] {.}/*:initialCommands
[info] */*:console::initialCommands
[info] */*:initialCommands
...
Triggered Execution
You can make a command run when certain files change by prefixing the
command with ~. Monitoring is terminated when enter is pressed. This triggered
execution is configured by the watch setting, but typically the basic settings
watchSources and pollInterval are modified.
• watchSources defines the files for a single project that are monitored
for changes. By default, a project watches resources and Scala and Java
sources.
• watchTransitiveSources then combines the watchSources for the
current project and all execution and classpath dependencies (see .scala build
definition for details on inter-project dependencies).
• pollInterval selects the interval between polling for changes in
milliseconds. The default value is 500 ms.
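For example, to slow the polling down (a sketch; the value is in milliseconds):

```scala
pollInterval := 1000 // poll once per second instead of every 500 ms
```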
Compile
> ~ test:compile
> ~ compile
Testing
You can use the triggered execution feature to run any command or task. One
use is for test driven development, as suggested by Erick on the mailing list.
The following will poll for changes to your source code (main or test) and run
testOnly for the specified test.
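A sketch of such a trigger (the test name is hypothetical):

```shell
> ~ testOnly org.example.MySpec
```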
Occasionally, you may need to trigger the execution of multiple commands. You
can use semicolons to separate the commands to be triggered.
The following will poll for source changes and run clean and test.
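A sketch of the triggered command sequence:

```shell
> ~ ;clean ;test
```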
sbt has two alternative entry points that may be used to:
Setup
To set up these entry points, you can either use conscript or manually construct
the startup scripts. In addition, there is a setup script for the script mode that
only requires a JRE installed.
Manual Setup Duplicate your standard sbt script, which was set up
according to Setup, as scalas and screpl (or whatever names you like).
scalas is the script runner and should use sbt.ScriptMain as the main class,
by adding the -Dsbt.main.class=sbt.ScriptMain parameter to the java
command. Its command line should look like:
For the REPL runner screpl, use sbt.ConsoleMain as the main class:
Usage
sbt Script runner The script runner can run a standard Scala script, but
with the additional ability to configure sbt. sbt settings may be embedded in
the script in a comment block that opens with /***.
Example Copy the following script and make it executable. You may
need to adjust the first line depending on your script name and operating
system. When run, the example should retrieve Scala, the required
dependencies, compile the script, and run it directly. For example, if you name it
dispatch_example.scala, you would do on Unix:
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.0-1"
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
import json.{ Js, JsObject }
sbt REPL with dependencies The arguments to the REPL mode configure
the dependencies to use when starting up the REPL. An argument may be either
a jar to include on the classpath, a dependency definition to retrieve and put
on the classpath, or a resolver to use when retrieving dependencies.
A dependency definition looks like:
organization%module%revision
organization%%module%revision
"id at url"
Example: To add the Sonatype snapshots repository and add Scalaz 7.0-
SNAPSHOT to REPL classpath:
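The example did not survive conversion; a reconstruction (repository name and module are illustrative) might be:

```shell
screpl "sonatype-snapshots at https://oss.sonatype.org/content/repositories/snapshots/" "org.scalaz%%scalaz-core%7.0-SNAPSHOT"
```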
This syntax was a quick hack. Feel free to improve it. The relevant class is
IvyConsole.
Understanding Incremental Recompilation
Compiling Scala code is slow, and sbt often makes it faster. By understanding
how, you can also learn how to make compilation even faster. Modifying
source files with many dependencies might require recompiling only those source
files (which might take, say, 5 seconds) instead of all the dependencies (which
might take, say, 2 minutes). Often you can control which case applies and
make development much faster with some simple coding practices.
In fact, improving Scala compilation times is one major goal of sbt, and
conversely the speedups it gives are one of the major motivations to use it. A
significant portion of sbt’s sources and development effort deals with strategies
for speeding up compilation.
To reduce compile times, sbt uses two strategies:
By organizing your source code appropriately, you can minimize the amount of
code affected by a change. sbt cannot determine precisely which dependencies
have to be recompiled; the goal is to compute a conservative approximation, so
that whenever a file must be recompiled, it will be, even though some extra files
may be recompiled as well.
sbt heuristics
sbt tracks source dependencies at the granularity of source files. For each source
file, sbt tracks files which depend on it directly; if the interface of classes,
objects or traits in a file changes, all files dependent on that source must be
recompiled. At the moment sbt uses the following algorithm to calculate source
files dependent on a given source file:
//A.scala
class A {
def foo: Int = 123
}
//B.scala
class B extends A
//C.scala
class C extends B
//D.scala
class D(a: A)
//E.scala
class E(d: D)
Now if the interface of A.scala is changed, the following files will get invalidated:
B.scala, C.scala, D.scala. Both B.scala and C.scala were included through
the transitive closure of inheritance dependencies. E.scala was not included
because E.scala doesn’t depend directly on A.scala.
The distinction between dependencies by inheritance and by member reference is
a new feature in sbt 0.13 and is responsible for improved recompilation times in
many cases where deep inheritance chains are not used extensively.
Note that sbt does not track dependencies on source code at the granularity of
individual output .class files, as one might hope. Doing so would be incorrect
because of some problems with sealed classes (see below for discussion).
Dependencies on binary files are different - they are tracked both on the .class
level and on the source file level. Adding a new implementation of a sealed trait
to source file A affects all clients of that sealed trait, and such dependencies are
tracked at the source file level.
Different sources are moreover recompiled together; hence a compile error in
one source implies that no bytecode is generated for any of them. When a lot
of files need to be recompiled and the fix for a compile error is not clear, it might
be best to comment out the offending location (if possible) to allow other sources
to be compiled, and then try to figure out how to fix it; this way, trying out a
possible solution to the compile error will take less time, say 5 seconds instead
of 2 minutes.
Debugging an interface representation If you see spurious incremental
recompilations, or you want to understand which changes to an extracted interface
cause incremental recompilation, sbt 0.13 has the right tools for that.
In order to debug the interface representation and its changes as you modify
and recompile source code, you need to do two things:
warning
Keep this option enabled only while you are debugging an incremental
compiler problem.
curl -O https://java-diff-utils.googlecode.com/files/diffutils-1.2.1.jar
sbt -Dsbt.extraClasspath=diffutils-1.2.1.jar
[info] Loading project definition from /Users/grek/tmp/sbt-013/project
[info] Set current project to sbt-013 (in build file:/Users/grek/tmp/sbt-013/)
> set incOptions := incOptions.value.copy(apiDebug = true)
[info] Defining *:incOptions
[info] The new value will be used by compile:incCompileSetup, test:incCompileSetup
[info] Reapplying settings...
[info] Set current project to sbt-013 (in build file:/Users/grek/tmp/sbt-013/)
class A {
def b: Int = 123
}
class A {
def b: String = "abc"
}
and run the compile task again. Now if you run last compile you should see the
following lines in the debugging log.
You can see a unified diff of the two textual interface representations. As you can
see, the incremental compiler detected a change to the return type of the b method.
The heuristics used by sbt imply the following user-visible consequences, which
determine whether a change to a class affects other classes.
XXX Please note that this part of the documentation is a first draft; part of the
strategy might be unsound, part of it might be not yet implemented.
4. Adding a method which did not exist requires recompiling all clients,
counterintuitively, due to complex scenarios with implicit conversions. Hence
in some cases you might want to start implementing a new method in a
separate, new class, complete the implementation, and then cut and paste
the complete implementation back into the original source.
5. Changing the implementation of a method should not affect its clients,
unless the return type is inferred, and the new implementation leads to a
slightly different type being inferred. Hence, annotating the return type of
a non-private method explicitly, if it is more general than the type actually
returned, can reduce the code to be recompiled when the implementation
of such a method changes. (Explicitly annotating return types of a public
API is a good practice in general.)
All the above discussion about methods also applies to fields and members in
general; similarly, references to classes also extend to objects and traits.
import java.io._
object A {
def openFiles(list: List[File]) =
list.map(name => new FileWriter(name))
}
Let us now consider the public interface of object A. Note that the return type
of method openFiles is not specified explicitly, but computed by type
inference to be List[FileWriter]. Suppose that after writing this source code, we
introduce client code and then modify A.scala as follows:
import java.io._
object A {
def openFiles(list: List[File]) =
Vector(list.map(name => new BufferedWriter(new FileWriter(name))): _*)
}
1. Concerning our topic, client code needs to be recompiled, since changing
the return type of a method, in the JVM, is a binary-incompatible interface
change.
2. If our component is a released library, using our new version requires
recompiling all client code, changing the version number, and so on. This is
often undesirable if you distribute a library for which binary compatibility
is an issue.
3. More generally, client code might now even be invalid. The following
code, for instance, becomes invalid after the change:
val a: Seq[Writer] =
new BufferedWriter(new FileWriter("bar.input")) +:
A.openFiles(List(new File("foo.input")))
XXX the rest of the section must be reintegrated or dropped: In general,
changing the return type of a method might be source-compatible, for instance if the
new type is more specific, or if it is less specific but still more specific than
the type required by clients (note, however, that making the type more specific
might still invalidate clients in non-trivial scenarios involving, for instance, type
inference or implicit conversions: for a more specific type, too many implicit
conversions might be available, leading to ambiguity); however, the bytecode
for a method call includes the return type of the invoked method, hence the
client code needs to be recompiled.
Hence, adding explicit return types on classes with many dependencies might
reduce the occasions where client code needs to be recompiled. Moreover, this is
in general a good development practice when interfaces between different modules
become important: specifying such interfaces documents the intended behavior
and helps ensure binary compatibility, which is especially important when the
exposed interface is used by other software components.
Further references
Configuration
This part of the documentation has pages documenting particular sbt topics in
detail. Before reading anything in here, you will need the information in the
Getting Started Guide as a foundation.
This page discusses how sbt builds up classpaths for different actions, like
compile, run, and test and how to override or augment these classpaths.
Basics
In sbt 0.10 and later, classpaths now include the Scala library and (when
declared as a dependency) the Scala compiler. Classpath-related settings
and tasks typically provide a value of type Classpath. This is an alias for
Seq[Attributed[File]]. Attributed is a type that associates a heterogeneous
map with each classpath entry. Currently, this allows sbt to associate the
Analysis resulting from compilation with the corresponding classpath entry
and, for managed entries, the ModuleID and Artifact that defined the
dependency.
To explicitly extract the raw Seq[File], use the files method implicitly added
to Classpath:
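A hedged sketch (the scope shown is illustrative; files is provided by sbt's classpath implicits):

```scala
// extract the plain files from the compile-time dependency classpath
val raw: Seq[File] = (dependencyClasspath in Compile).value.files
```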
sourceGenerators in Compile += Def.task {
  generate( (sourceManaged in Compile).value / "some_directory")
}.taskValue
In this example, generate is some function of type File => Seq[File] that
actually does the work. So, we are appending a new task to the list of main
source generators (sourceGenerators in Compile).
To insert a named task, which is the better approach for plugins:
mySourceGenerator in Compile :=
generate( (sourceManaged in Compile).value / "some_directory")

The task method is used to refer to the actual task instead of the result of the
task.
For resources, there are similar keys resourceGenerators and resourceManaged.
Read more on How to exclude .scala source file in project folder - Google Groups
External vs internal Classpaths are also divided into internal and external
dependencies. The internal dependencies are inter-project dependencies. These
effectively put the outputs of one project on the classpath of another project.
External classpaths are the union of the unmanaged and managed classpaths.
• unmanagedClasspath
• managedClasspath
• externalDependencyClasspath
• internalDependencyClasspath
For sources:
For resources
• resourceGenerators These are tasks that generate resource files. Typically,
these tasks will put resources in the directory provided by resourceManaged.
Example You have a standalone project which uses a library that loads
xxx.properties from the classpath at run time. You put xxx.properties inside the
directory “config”. When you run “sbt run”, you want the directory to be on the
classpath.
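One way to do this is a sketch like the following, relying on sbt's conversion of a File to a classpath entry:

```scala
// put the config/ directory on the runtime classpath
unmanagedClasspath in Runtime += baseDirectory.value / "config"
```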
There is some special support for using compiler plugins. You can set
autoCompilerPlugins to true to enable this functionality.
autoCompilerPlugins := true
To use a compiler plugin, you either put it in your unmanaged library directory
(lib/ by default) or add it as a managed dependency in the plugin configuration.
addCompilerPlugin is a convenience method for specifying plugin as the
configuration for a dependency:
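For example, a sketch using the sxr plugin mentioned below (the version shown is illustrative):

```scala
addCompilerPlugin("org.scala-tools.sxr" %% "sxr" % "0.3.0")
```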
The compile and testCompile actions will use any compiler plugins found
in the lib directory or in the plugin configuration. You are responsible for
configuring the plugins as necessary. For example, Scala X-Ray requires the
extra option:
scalacOptions += "-Xplugin:<path-to-sxr>/sxr-0.3.0.jar"
Continuations Plugin Example
autoCompilerPlugins := true
addCompilerPlugin("org.scala-lang.plugins" % "continuations" % scalaVersion.value)
scalacOptions += "-P:continuations:enable"
The plugin dependency may equivalently be declared directly with compilerPlugin:
autoCompilerPlugins := true
libraryDependencies +=
compilerPlugin("org.scala-lang.plugins" % "continuations" % scalaVersion.value)
scalacOptions += "-P:continuations:enable"
Configuring Scala
sbt needs to obtain Scala for a project and it can do this automatically or you
can configure it explicitly. The Scala version that is configured for a project
will compile, run, document, and provide a REPL for the project code. When
compiling a project, sbt needs to run the Scala compiler as well as provide
the compiler with a classpath, which may include several Scala jars, like the
reflection jar.
The most common case is when you want to use a version of Scala that is
available in a repository. The only required configuration is the Scala version
you want to use. For example,
scalaVersion := "2.10.0"
This will retrieve Scala from the repositories configured via the resolvers
setting. It will use this version for building your project: compiling, running,
scaladoc, and the REPL.
Configuring the scala-library dependency By default, the standard Scala
library is automatically added as a dependency. If you want to configure it
differently than the default or you have a project with only Java sources, set:
autoScalaLibrary := false
In order to compile Scala sources, the Scala library needs to be on the classpath.
When autoScalaLibrary is true, the Scala library will be on all classpaths: test,
runtime, and compile. Otherwise, you need to add it like any other dependency.
For example, the following dependency definition uses Scala only for tests:
autoScalaLibrary := false
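A sketch of the test-only dependency declaration:

```scala
libraryDependencies += "org.scala-lang" % "scala-library" % scalaVersion.value % "test"
```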
managedScalaInstance := false
managedScalaInstance := false
// Add the usual dependency on the library as well on the compiler in the
// 'scala-tool' configuration
libraryDependencies ++= Seq(
"org.scala-lang" % "scala-library" % scalaVersion.value,
"org.scala-lang" % "scala-compiler" % scalaVersion.value % "scala-tool"
)
managedScalaInstance := false
scalaInstance := ...
scalaHome := Some(file("/home/user/scala-2.10/"))
No managed dependency is recorded on scala-library. This
means that Scala will only be resolved from a repository if you explicitly define
a dependency on Scala or if Scala is depended on indirectly via a dependency.
In these cases, the artifacts for the resolved dependencies will be substituted
with jars in the Scala home lib/ directory.
scalaHome := Some(file("/home/user/scala-2.10/"))
This will be resolved as normal, except that sbt will see if /home/user/scala-2.10/lib/scala-reflect.jar
exists. If it does, that file will be used in place of the artifact from the managed
dependency.
To add only some jars, filter the jars from scalaInstance before adding them.
sbt needs Scala jars to run itself since it is written in Scala. sbt uses that same
version of Scala to compile the build definitions that you write for your project
because they use sbt APIs. This version of Scala is fixed for a specific sbt release
and cannot be changed. For sbt 0.13.5, this version is Scala 2.10.3. Because
this Scala version is needed before sbt runs, the repositories used to retrieve this
version are configured in the sbt launcher.
Forking
By default, the run task runs in the same JVM as sbt. Under certain
circumstances, however, forking is required. You might also want to fork Java
processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being used
for the build and the working directory and JVM options of the current process.
This page discusses how to enable and configure forking for both run and test
tasks. Each kind of task may be configured separately by scoping the relevant
keys as explained below.
Enable forking
The fork setting controls whether forking is enabled (true) or not (false). It
can be set in the run scope to only fork run commands or in the test scope to
only fork test commands.
To fork all test tasks (test, testOnly, and testQuick) and run tasks (run,
runMain, test:run, and test:runMain),
fork := true
To enable forking run tasks only, set fork to true in the run scope.
Similarly, set fork in (Compile,run) := true to only fork the main run
tasks. run and runMain share the same configuration and cannot be configured
separately.
To enable forking all test tasks only, set fork to true in the test scope:
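For example, the test-scoped setting:

```scala
fork in Test := true
```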
See Testing for more control over how tests are assigned to JVMs and what
options to pass to each group.
// sets the working directory for all `run`-like tasks
baseDirectory in run := file("/path/to/working/directory/")
or specify the configuration to affect only the main or test run tasks:
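Sketches of the configuration-scoped variants:

```scala
// working directory for the main run tasks only
baseDirectory in (Compile, run) := file("/path/to/working/directory/")

// working directory for the test run tasks only
baseDirectory in (Test, run) := file("/path/to/working/directory/")
```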
Java Home
javaHome := Some(file("/path/to/jre/"))
Note that if this is set globally, it also sets the Java installation used to compile
Java sources. You can restrict it to running only by setting it in the run scope:
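For example:

```scala
javaHome in run := Some(file("/path/to/jre/"))
```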
As with the other settings, you can specify the configuration to affect only the
main or test run tasks or just the test tasks.
Configuring output
By default, forked output is sent to the Logger, with standard output logged at
the Info level and standard error at the Error level. This can be configured
with the outputStrategy setting, which is of type OutputStrategy.
// send output to the provided Logger `log` after the process terminates
outputStrategy := Some(BufferedOutput(log: Logger))
As with other settings, this can be configured individually for main or test run
tasks or for test tasks.
Configuring Input
By default, the standard input of the sbt process is not forwarded to the forked
process. To enable this, configure the connectInput setting:
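For example:

```scala
// forward sbt's standard input to the forked process
connectInput in run := true
```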
Direct Usage
To fork a new Java process, use the Fork API. The values of interest are
Fork.java, Fork.javac, Fork.scala, and Fork.scalac. These are of type
Fork and provide apply and fork methods. For example, to fork a new Java
process:
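A hedged sketch (the options, classpath, and main class shown are illustrative):

```scala
// returns the exit code of the forked java process
val exitCode: Int = Fork.java(ForkOptions(), Seq("-cp", "classes", "example.Main"))
```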
ForkOptions defines the Java installation to use, the working directory,
environment variables, and more. For example:
val cwd: File = ...
val javaDir: File = ...
val options = ForkOptions(
envVars = Map("KEY" -> "value"),
workingDirectory = Some(cwd),
javaHome = Some(javaDir)
)
Global Settings
To change the default shellPrompt for every project using this approach, create
a local plugin ~/.sbt/0.13/plugins/ShellPrompt.scala:
import sbt._
import Keys._
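A sketch of the rest of the file, continuing the imports above; the old-style Plugin trait and the prompt format are illustrative:

```scala
object ShellPrompt extends Plugin {
  // settings defined in a Plugin apply to every project
  override def settings = Seq(
    shellPrompt := { state =>
      "sbt (%s)> ".format(Project.extract(state).currentProject.id)
    }
  )
}
```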
The ~/.sbt/0.13/plugins/ directory is a full project that is included as an
external dependency of every plugin project. In practice, settings and code
defined here effectively work as if they were defined in a project’s project/
directory. This means that ~/.sbt/0.13/plugins/ can be used to try out
ideas for plugins such as shown in the shellPrompt example.
Java Sources
sbt has support for compiling Java sources with the limitation that dependency
tracking is limited to the dependencies present in compiled class files.
Usage
javacOptions += "-g:none"
As with options for the Scala compiler, the arguments are not parsed by sbt.
Multi-element options, such as -source 1.5, are specified like:
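For example:

```scala
javacOptions ++= Seq("-source", "1.5")
```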
You can specify the order in which Scala and Java sources are built with the
compileOrder setting. Possible values are from the CompileOrder enumeration:
Mixed, JavaThenScala, and ScalaThenJava. If you have circular dependencies
between Scala and Java sources, you need the default, Mixed, which passes
both Java and Scala sources to scalac and then compiles the Java sources with
javac. If you do not have circular dependencies, you can use one of the other
two options to speed up your build by not passing the Java sources to scalac.
For example, if your Scala sources depend on your Java sources, but your Java
sources do not depend on your Scala sources, you can do:
compileOrder := CompileOrder.JavaThenScala
To specify different orders for main and test sources, scope the setting by
configuration:
// Java then Scala for main sources
compileOrder in Compile := CompileOrder.JavaThenScala
However, there should not be any harm in leaving the Scala directories if they
are empty.
Mapping Files
Relative to a directory
The Path.relativeTo method is used to map a File to its path String relative
to a base directory or directories. The relativeTo method accepts a base
directory or sequence of base directories to relativize an input file against.
The first directory that is an ancestor of the file is used in the case of a
sequence of base directories.
For example:
import Path.relativeTo
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files pair relativeTo(baseDirectories)
Rebase
The Path.rebase method relativizes an input file against one or more base
directories (the first argument) and then prepends a base String or File (the
second argument) to the result. As with relativeTo, the first base directory
that is an ancestor of the input file is used in the case of multiple base directories.
For example, the following demonstrates building a Seq[(File, String)] using
rebase:
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files pair rebase(baseDirectories, "pre/")
To build File-to-File mappings instead, provide a new base directory as the
second argument:
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files pair rebase(baseDirectories, newBase)
Flatten
The Path.flat method provides a function that maps a file to the last compo-
nent of the path (its name). For a File to File mapping, the input file is mapped
to a file with the same name in a given target directory. For example:
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val mappings: Seq[(File,String)] = files pair flat
For a File-to-File mapping into a target directory:
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files pair flat(newBase)
Alternatives
To try to apply several alternative mappings for a file, use |, which is implicitly
added to a function of type A => Option[B]. For example, to try to relativize
a file against some base directories but fall back to flattening:
import Path.relativeTo
val files: Seq[File] = file("/a/b/C.scala") :: file("/zzz/D.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files pair ( relativeTo(baseDirectories) | flat )
Local Scala
To use a locally built Scala version, define the scalaHome setting, which is of
type Option[File]. This Scala version will only be used for the build and not
for sbt, which will still use the version it was compiled against.
Example:
scalaHome := Some(file("/path/to/scala"))
Using a local Scala version will override the scalaVersion setting and will not
work with cross building.
sbt reuses the class loader for the local Scala version. If you recompile your
local Scala version and you are using sbt interactively, run
> reload
Macro Projects
Introduction
1. The current macro implementation in the compiler requires that macro
implementations be compiled before they are used. The solution is typically
to put the macros in a subproject or in their own configuration.
2. Sometimes the macro implementation should be distributed with the main
code that uses them and sometimes the implementation should not be
distributed at all.
import sbt._
import Keys._
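A sketch of the two-project definition (project names are illustrative; scala-reflect stands in for the compiler-side dependency):

```scala
object MacroBuild extends Build {
  // the main project uses macros provided by the subproject
  lazy val main = Project("main", file(".")) dependsOn(macroSub)
  // the macro implementation, compiled before it is used
  lazy val macroSub = Project("macro", file("macro")) settings(
    libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
  )
}
```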
This specifies that the macro implementation goes in macro/src/main/scala/
and tests go in macro/src/test/scala/. It also shows that we need a dependency
on the compiler for the macro implementation. As an example macro, we’ll use
desugar from macrocosm. macro/src/main/scala/demo/Demo.scala:
package demo
import language.experimental.macros
import scala.reflect.macros.Context
object Demo {
// Returns the tree of `a` after the typer, printed as source code.
def desugar(a: Any): String = macro desugarImpl
def desugarImpl(c: Context)(a: c.Expr[Any]) = {
import c.universe._
val s = show(a.tree)
c.Expr(
Literal(Constant(s))
)
}
}
macro/src/test/scala/demo/Usage.scala:
package demo
object Usage {
def main(args: Array[String]) {
val s = Demo.desugar(List(1, 2, 3).reverse)
println(s)
}
}
The main project can use the macro as well. src/main/scala/demo/Usage.scala:
package demo
object Usage {
def main(args: Array[String]) {
val s = Demo.desugar(List(6, 4, 5).sorted)
println(s)
}
}
Common Interface
Sometimes, the macro implementation and the macro usage should share some
common code. In this case, declare another subproject for the common code and
have the main project and the macro subproject depend on the new subproject.
For example, the project definitions from above would look like:
Distribution
To include the macro code with the main code, add the binary and source
mappings from the macro subproject to the main project. For example, the
main Project definition above would now look like:
You may wish to disable publishing the macro implementation. This is done by
overriding publish and publishLocal to do nothing:
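For example, in the macro subproject's settings:

```scala
publish := {}
publishLocal := {}
```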
The techniques described here may also be used for the common interface
described in the previous section.
Paths
This page describes files, sequences of files, and file filters. The base type used
is java.io.File, but several methods are augmented through implicits:
Constructing a File
sbt 0.10+ uses java.io.File to represent a file instead of the custom sbt.Path
class that was in sbt 0.7 and earlier. sbt defines the alias File for java.io.File
so that an extra import is not necessary. The file method is an alias for the
single-argument File constructor to simplify constructing a new file from a
String:
Additionally, sbt augments File with a / method, which is an alias for the two-
argument File constructor for building up a path:
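Illustrative sketches of both (the paths are hypothetical):

```scala
val source: File = file("/home/user/code/A.scala")
val projectDir: File = file("/home/user")
val subDir: File = projectDir / "code" / "project"
```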
Relative files should only be used when defining the base directory of a Project,
where they will be resolved properly.
This setting sets the location of the shell history to be in the base directory of
the build, irrespective of the project the setting is defined in:
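A sketch, assuming the historyPath key:

```scala
historyPath := Some((baseDirectory in ThisBuild).value / ".history")
```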
Path Finders
get This selects all files that end in .scala that are in src or a descendent
directory. The list of files is not actually evaluated until get is called:
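A sketch of such a PathFinder:

```scala
val scalaSources: PathFinder = file("src") ** "*.scala"  // lazy selection
val sources: Seq[File] = scalaSources.get                // evaluated here
```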
If the filesystem changes, a second call to get on the same PathFinder object
will reflect the changes. That is, the get method reconstructs the list of files
each time. Also, get only returns Files that existed at the time it was called.
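A sketch using the single-level child selector *:

```scala
val scalaFiles: Seq[File] = (file("src") * "*.scala").get
```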
This selects all files that end in .scala that are in the src directory.
Name Filter The argument to the child and descendent selectors * and ** is
actually a NameFilter. An implicit is used to convert a String to a NameFilter
that interprets * to represent zero or more characters of any value. See the Name
Filters section below for more information.
Combining PathFinders Another operation is concatenation of PathFinders:
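A sketch:

```scala
val paths: PathFinder = file("src/main") +++ file("lib") +++ file("target/classes")
```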
When evaluated using get, this will return src/main/, lib/, and
target/classes/. The concatenated finder supports all standard methods. For
example,
The first selector selects all Scala sources and the second selects all sources that
are a descendent of a .svn directory. The --- method removes all files returned
by the second selector from the sequence of files returned by the first selector.
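The subtraction just described can be sketched as:

```scala
val sources = (file("src") ** "*.scala") --- (file("src") ** ".svn" ** "*.scala")
```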
Filtering There is a filter method that accepts a predicate of type File =>
Boolean and is non-strict:
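A sketch selecting only directories:

```scala
val dirs: PathFinder = (file("src") ** "*") filter { _.isDirectory }
```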
PathFinder to String conversions Convert a PathFinder to a String using
one of the following methods:
Mappings The packaging and file copying methods in sbt expect values of
type Seq[(File,String)] and Seq[(File,File)], respectively. These are
mappings from the input file to its (String) path in the jar or its (File)
destination. This approach replaces the relative path approach (using the ##
method) from earlier versions of sbt.
Mappings are discussed in detail on the Mapping-Files page.
File Filters
There are some useful combinators added to FileFilter. The || method
declares alternative FileFilters. The following example selects all Java or
Scala source files under “src”:
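A sketch:

```scala
val sources = file("src") ** ("*.scala" || "*.java")
```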
The -- method excludes files matching a second filter from the files matched
by the first:
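A sketch:

```scala
val images = file("src") ** ("*.png" -- "logo.png")
```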
This will get right.png and left.png, but not logo.png, for example.
Parallel Execution
Task ordering
write := IO.write(file("/tmp/sample.txt"), "Some content.")
read := IO.read(file("/tmp/sample.txt"))
sbt is free to execute write first and then read, read first and then write, or
read and write simultaneously. Execution of these tasks is non-deterministic
because they share a file. A correct declaration of the tasks would be:
write := {
val f = file("/tmp/sample.txt")
IO.write(f, "Some content.")
f
}
read := IO.read(write.value)
This establishes an ordering: read must run after write. We’ve also guaranteed
that read will read from the same file that write created.
Practical constraints
Note: The feature described in this section is experimental. In particular, the
default configuration of the feature is subject to change.
2. Enabling or disabling mapping tests to their own tasks (parallelExecution
in Test := false, for example).
Configuration
The system is thus dependent on proper tagging of tasks and then on a good
set of rules.
compile := myCompileTask.value
download := downloadImpl.value
Defining Restrictions Once tasks are tagged, the concurrentRestrictions
setting sets restrictions on the tasks that may be concurrently executed based
on the weighted tags of those tasks. This is necessarily a global set of rules, so
it must be scoped in Global. For example,
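A sketch with illustrative limits:

```scala
concurrentRestrictions in Global := Seq(
  Tags.limit(Tags.CPU, 2),      // at most 2 CPU-intensive tasks at a time
  Tags.limit(Tags.Network, 10), // at most 10 network-using tasks
  Tags.limit(Tags.Test, 1),     // run tests serially
  Tags.limitAll(15)             // overall cap on concurrently executing tasks
)
```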
Note that these restrictions rely on proper tagging of tasks. Also, the value
provided as the limit must be at least 1 to ensure every task is able to be
executed. sbt will generate an error if this condition is not met.
Most tasks won’t be tagged because they are very short-lived. These tasks are
automatically assigned the label Untagged. You may want to include these tasks
in the CPU rule by using the limitSum method. For example:
...
Tags.limitSum(2, Tags.CPU, Tags.Untagged)
...
Note that the limit is the first argument so that tags can be provided as varargs.
Another useful convenience function is Tags.exclusive. This specifies that a
task with the given tag should execute in isolation. It starts executing only
when no other tasks are running (even if they have the exclusive tag) and no
other tasks may start execution until it completes. For example, a task could
be tagged with a custom tag Benchmark and a rule configured to ensure such a
task is executed by itself:
...
Tags.exclusive(Benchmark)
...
Finally, for the most flexibility, you can specify a custom function of type
Map[Tag,Int] => Boolean. The Map[Tag,Int] represents the weighted
tags of a set of tasks. If the function returns true, it indicates that the
set of tasks is allowed to execute concurrently. If the return value is false,
the set of tasks will not be allowed to execute concurrently. For example,
Tags.exclusive(Benchmark) is equivalent to the following:
...
Tags.customLimit { (tags: Map[Tag,Int]) =>
val exclusive = tags.getOrElse(Benchmark, 0)
// the total number of tasks in the group
val all = tags.getOrElse(Tags.All, 0)
// if there are no exclusive tasks in this group, this rule adds no restrictions
exclusive == 0 ||
// If there is only one task, allow it to execute.
all == 1
}
...
There are some basic rules that custom functions must follow, but the main one
to be aware of in practice is that if there is only one task, it must be allowed to
execute. sbt will generate a warning if the user defines restrictions that prevent
a task from executing at all and will then execute the task anyway.
Built-in Tags and Rules Built-in tags are defined in the Tags object. All
tags listed below must be qualified by this object. For example, CPU refers to
the Tags.CPU value.
The built-in semantic tags are:
The tasks that are currently tagged by default are:
Of additional note is that the default test task will propagate its tags to each
child task created for each test class.
The default rules provide the same behavior as previous versions of sbt:
concurrentRestrictions in Global := {
val max = Runtime.getRuntime.availableProcessors
Tags.limitAll(if(parallelExecution.value) max else 1) :: Nil
}
Custom Tags To define a new tag, pass a String to the Tags.Tag method.
For example:
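A sketch (the tag name is illustrative):

```scala
val Custom = Tags.Tag("custom")
```

The task's implementation can then carry the tag, e.g. def aImpl = Def.task { /* work */ } tag(Custom).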
aCustomTask := aImpl.value
concurrentRestrictions in Global +=
Tags.limit(Custom, 1)
Future work
This is an experimental feature and there are several aspects that may change
or require further work.
Tagging Tasks Currently, a tag applies only to the immediate computation
it is defined on. For example, in the following, the second compile definition has
no tags applied to it. Only the first computation is labeled.
compile := myCompileTask.value
compile := {
val result = compile.value
... do some post processing ...
}
Default Behavior User feedback on what custom rules work for what workloads
will help determine a good set of default tags and rules.
External Processes
Usage
sbt includes a process library to simplify working with external processes. The
library is available without import in build definitions and at the interpreter
started by the consoleProject task.
To run an external command, follow it with an exclamation mark !:
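For example:

```scala
"ls -l" !  // runs the command, returning its exit code
```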
If you need to set the working directory or modify the environment, call
sbt.Process explicitly, passing the command sequence (command and argument
list) or command string first and the working directory second. Any
environment variables can be passed as a vararg list of key/value String pairs.
Process("ls" :: "-l" :: Nil, Path.userHome, "key1" -> value1, "key2" -> value2) ! log
• a #&& b Execute a. If the exit code is nonzero, return that exit code and
do not execute b. If the exit code is zero, execute b and return its exit
code.
• a #|| b Execute a. If the exit code is zero, return zero for the exit code
and do not execute b. If the exit code is nonzero, execute b and return
its exit code.
• a #| b Execute a and b, piping the output of a to the input of b.
There are also operators defined for redirecting output to Files and input from
Files and URLs. In the following definitions, url is an instance of URL and file
is an instance of File.
• a #< url or url #> a Use url as the input to a. a may be a File or a
command.
• a #< file or file #> a Use file as the input to a. a may be a File or
a command.
• a #> file or file #< a Write the output of a to file. a may be a File,
URL, or a command.
• a #>> file or file #<< a Append the output of a to file. a may be a
File, URL, or a command.
There are some additional methods to get the output from a forked process into
a String or the output lines as a Stream[String]. Here are some examples,
but see the ProcessBuilder API for details.
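Sketches of the two:

```scala
val output: String = ("ls -l").!!           // block and capture standard output
val lines: Stream[String] = ("ls -l").lines // lazily evaluated output lines
```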
Finally, there is a cat method to send the contents of Files and URLs to standard
output.
Copy a File:
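A sketch (the file names are illustrative):

```scala
file("src.txt") #> file("dest.txt") !
```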
url("http://databinder.net/dispatch/About") #> "grep JSON" #>> file("About_JSON") !
// or
file("About_JSON") #<< ( "grep JSON" #< url("http://databinder.net/dispatch/About") ) !
"find src -name *.scala -exec grep null {} ;" #| "xargs test -z" #&& "echo null-free" #|| "echo null detected" !
Use cat:
Running Project Code
The run and console actions provide a means for running user code in the same
virtual machine as sbt. This page describes the problems with doing so, how
sbt handles these problems, what types of code can use this feature, and what
types of code must use a forked JVM. Skip to User Code if you just want to see
when you should use a forked JVM.
Problems
System.exit User code can call System.exit, which normally shuts down the
JVM. Because the run and console actions run inside the same JVM as sbt,
this also ends the build and requires restarting sbt.
Threads User code can also start other threads. Threads can be left running
after the main method returns. In particular, creating a GUI creates several
threads, some of which may not terminate until the JVM terminates. The
program is not completed until either System.exit is called or all non-daemon
threads terminate.
sbt’s Solutions
Threads sbt makes a list of all threads running before executing user code.
After the user code returns, sbt can then determine the threads created by the
user code. For each user-created thread, sbt replaces the uncaught exception
handler with a custom one that handles the custom SecurityException thrown
by calls to System.exit and delegates to the original handler for everything else.
sbt then waits for each created thread to exit or for System.exit to be called.
sbt handles a call to System.exit as described above.
A user-created thread is one that is not in the system thread group and is not
an AWT implementation thread (e.g. AWT-XAWT, AWT-Windows). User-created
threads include the AWT-EventQueue-* thread(s).
User Code Given the above, when can user code be run with the run and
console actions?
The user code cannot rely on shutdown hooks and at least one of the following
situations must apply for user code to run in the same JVM:
The requirements on threading and shutdown hooks exist because the JVM does not
actually shut down. So, shutdown hooks cannot be run and threads are not
terminated unless they stop when interrupted. If these requirements are not
met, code must run in a forked JVM.
The feature of allowing System.exit and multiple threads to be used cannot
completely emulate the situation of running in a separate JVM and is intended
for development. Program execution should be checked in a forked jvm when
using multiple threads or System.exit.
As of sbt 0.13.1, multiple run instances can be managed. There can only be one
application that uses AWT at a time, however.
Testing
Basics
The resources may be accessed from tests by using the getResource methods
of java.lang.Class or java.lang.ClassLoader.
The main Scala testing frameworks (specs2, ScalaCheck, and ScalaTest) provide
an implementation of the common test interface and only need to be added to the
classpath to work with sbt. For example, ScalaCheck may be used by declaring
it as a managed dependency:
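A sketch (the version shown is illustrative):

```scala
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.11.4" % "test"
```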
The fourth component "test" is the configuration and means that ScalaCheck
will only be on the test classpath and it isn’t needed by the main sources. This
is generally good practice for libraries because your users don’t typically need
your test dependencies to use your library.
With the library dependency defined, you can then add test sources in the
locations listed above and compile and run tests. The tasks for running tests
are test and testOnly. The test task accepts no command line arguments
and runs all tests:
> test
testOnly The testOnly task accepts a whitespace separated list of test names
to run. For example:
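A sketch with hypothetical test names:

```
> testOnly org.example.MyTest1 org.example.MyTest2
```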
testQuick The testQuick task, like testOnly, allows filtering the tests to
run to specific tests or wildcards, using the same syntax to indicate the
filters. In addition to the explicit filter, only the tests that satisfy one of
the following conditions are run:
Tab completion Tab completion is provided for test names based on the
results of the last test:compile. This means that new sources aren't available
for tab completion until they are compiled, and deleted sources won't be removed
from tab completion until a recompile. A new test source can still be manually
written out and run using testOnly.
Other tasks Tasks that are available for main sources are generally available
for test sources, but are prefixed with test: on the command line and are
referenced in Scala code with in Test. These tasks include:
• test:compile
• test:console
• test:consoleQuick
• test:run
• test:runMain
Output
By default, logging is buffered for each test source file until all tests for that file
complete. This can be disabled by setting logBuffered:
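For example:

```scala
logBuffered in Test := false
```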
Test Reports By default, sbt will generate JUnit XML test reports for all
tests in the build, located in the target/test-reports directory for a project.
This can be disabled by disabling the JUnitXmlReportPlugin:
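A sketch of disabling the plugin on a project (sbt 0.13.5+ auto plugins; the project name is illustrative):

```scala
val myProject = (project in file(".")).disablePlugins(plugins.JUnitXmlReportPlugin)
```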
Options
To specify test framework arguments as part of the build, add options con-
structed by Tests.Argument:
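A sketch passing a ScalaCheck verbosity option (the arguments are illustrative):

```scala
testOptions in Test += Tests.Argument("-verbosity", "1")
```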
Setup and Cleanup Specify setup and cleanup actions using Tests.Setup
and Tests.Cleanup. These accept either a function of type () => Unit or a
function of type ClassLoader => Unit. The variant that accepts a ClassLoader
is passed the class loader that is (or was) used for running the tests. It provides
access to the test classes as well as the test framework classes.
Examples:
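Sketches of the no-argument variants:

```scala
testOptions in Test += Tests.Setup( () => println("Setup") )
testOptions in Test += Tests.Cleanup( () => println("Cleanup") )
```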
Disable Parallel Execution of Tests By default, sbt runs all tasks in
parallel and within the same JVM as sbt itself. Because each test is mapped to
a task, tests are also run in parallel by default. To make tests within a given
project execute serially:
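For example:

```scala
parallelExecution in Test := false
```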
Filter classes If you want to only run test classes whose name ends with
“Test”, use Tests.Filter:
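For example:

```scala
testOptions in Test := Seq(Tests.Filter(s => s.endsWith("Test")))
```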
Forking tests Setting fork in Test := true specifies that all tests will be
executed in a single external JVM. See Forking for configuring standard options
for forking. By default, tests executed in a forked JVM are executed
sequentially. More control over how tests are assigned to JVMs and what options
to pass to those is available with the testGrouping key. For example, in
build.sbt:
import Tests._
{
def groupByFirst(tests: Seq[TestDefinition]) =
tests groupBy (_.name(0)) map {
case (letter, tests) => new Group(letter.toString, tests, SubProcess(Seq("-Dfirst.lett
} toSeq
The tests in a single group are run sequentially. Control the number of
forked JVMs allowed to run at the same time by setting the limit on the
Tags.ForkedTestGroup tag, which is 1 by default. Setup and Cleanup actions
cannot be provided with the actual test class loader when a group is forked.
In addition, forked tests can optionally be run in parallel. This feature is still considered experimental, and may be enabled with the following setting:
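A sketch, assuming the testForkedParallel key available in later 0.13 releases:

```scala
// experimental: run tests within each forked JVM in parallel
testForkedParallel in Test := true
```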
You can add an additional test configuration to have a separate set of test
sources and associated compilation, packaging, and testing tasks and settings.
The steps are:
• Add the tasks and settings
• Declare library dependencies
• Create sources
• Run tasks
The following two examples demonstrate this. The first example shows how
to enable integration tests. The second shows how to define a customized test
configuration. This allows you to define multiple types of tests per project.
import sbt._
import Keys._
The standard testing tasks are available, but must be prefixed with it:. For
example,
> it:testOnly org.example.AnIntegrationTest
Most IntegrationTest settings delegate to the corresponding Test settings by default, so options specified in the Test configuration will be picked up by the Test configuration and in turn by the IntegrationTest configuration. Options can be added specifically for integration tests by putting them in the IntegrationTest configuration:
Or, use := to overwrite any existing options, declaring these to be the definitive
integration test options:
import sbt._
import Keys._
settings( inConfig(FunTest)(Defaults.testSettings) : _*)
This says to add the test tasks and settings to the FunTest configuration.
We could have done it this way for integration tests as well. In fact,
Defaults.itSettings is a convenience definition: val itSettings =
inConfig(IntegrationTest)(Defaults.testSettings).
The comments in the integration test section hold, except with IntegrationTest
replaced with FunTest and "it" replaced with "fun". For example, test options
can be configured specifically for FunTest:
> fun:test
import sbt._
import Keys._
• We are now only adding the test tasks (inConfig(FunTest)(Defaults.testTasks))
and not compilation and packaging tasks and settings.
• We filter the tests to be run for each configuration.
To run the standard tests, run test as usual:

> test
To run tests for the added configuration (here, "fun"), prefix it with the config-
uration name as before:
> fun:test
> fun:testOnly org.example.AFunTest
The tests to run in parallel would be run with test and the ones to run in serial
would be run with serial:test.
JUnit
Support for JUnit is provided by junit-interface. To add JUnit support into your
project, add the junit-interface dependency in your project’s main build.sbt file.
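A sketch of the dependency (the version shown is an assumption; use the latest junit-interface release):

```scala
libraryDependencies += "com.novocode" % "junit-interface" % "0.11" % "test"
```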
Extensions
This page describes adding support for additional testing libraries and defining
additional test reporters. You do this by implementing sbt interfaces (described
below). If you are the author of the testing framework, you can depend on
the test interface as a provided dependency. Alternatively, anyone can provide
support for a test framework by implementing the interfaces in a separate project
and packaging the project as an sbt Plugin.
Custom Test Framework The main Scala testing libraries have built-in sup-
port for sbt. To add support for a different framework, implement the uniform
test interface.
Custom Test Reporters Test frameworks report status and results to test
reporters. You can create a new test reporter by implementing either TestRe-
portListener or TestsListener.
Specify the test reporters you want to use by overriding the testListeners
setting in your project definition.
testListeners += customTestListener
Dependency Management
This part of the documentation has pages documenting particular sbt topics in
detail. Before reading anything in here, you will need the information in the
Getting Started Guide as a foundation.
Artifacts
By default, the published artifacts are the main binary jar, a jar containing the
main sources and resources, and a jar containing the API documentation. You
can add artifacts for the test classes, sources, or API or you can disable some
of the main artifacts.
To add all test artifacts:
// enable publishing the jar produced by `test:package`
publishArtifact in (Test, packageBin) := true

// enable publishing the test API jar
publishArtifact in (Test, packageDoc) := true

// enable publishing the test sources jar
publishArtifact in (Test, packageSrc) := true
artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
artifact.name + "-" + module.revision + "." + artifact.extension
}
myTask := {
val (art, file) = packagedArtifact.in(Compile, packageBin).value
println("Artifact definition: " + art)
println("Packaged file: " + file.getAbsolutePath)
}
In addition to configuring the built-in artifacts, you can declare other artifacts
to publish. Multiple artifacts are allowed when using Ivy metadata, but a Maven
POM file only supports distinguishing artifacts based on classifiers and these
are not recorded in the POM.
Basic Artifact construction looks like:
For example:
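Some common constructions, as a sketch (check the Artifact API for the exact overloads):

```scala
// name only; type and extension default to "jar"
Artifact("myproject")

// name, type, and extension
Artifact("myproject", "zip", "zip")

// name and classifier
Artifact("myproject", "sources")
```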
See the Ivy documentation for more details on artifacts. See the Artifact API
for combining the parameters above and specifying Configurations and extra
attributes.
To declare these artifacts for publishing, map them to the task that generates
the artifact:
val myImageTask = taskKey[File](...)
myImageTask := {
val artifact: File = makeArtifact(...)
artifact
}
...
lazy val proj = Project(...).
settings( addArtifact(...).settings : _* )
...
A common use case for web applications is to publish the .war file instead of
the .jar file.
To specify the artifacts to use from a dependency that has custom or multiple
artifacts, use the artifacts method on your dependencies. For example:
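A sketch using hypothetical coordinates:

```scala
// select a specific artifact from a dependency that publishes several
libraryDependencies +=
  "org.example" % "some-module" % "1.0" artifacts Artifact("some-module", "jar", "jar")
```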
The from and classifier methods (described on the Library Management page) are actually convenience methods that translate to artifacts:
def from(url: String) = artifacts( Artifact(name, new URL(url)) )
def classifier(c: String) = artifacts( Artifact(name, c) )
Background
5. Overriding all of the above, skip in update := true will tell sbt to
never perform resolution. Note that this can cause dependent tasks to
fail. For example, compilation may fail if jars have been deleted from the
cache (and so needed classes are missing) or a dependency has been added
(but will not be resolved because skip is true). Also, update itself will
immediately fail if resolution has not been allowed to run since the last
clean.
A. Run update explicitly. This will typically fix problems with out of date
SNAPSHOTs or locally published artifacts.
B. If a file cannot be found, look at the output of update to see where Ivy
is looking for the file. This may help diagnose an incorrectly defined
dependency or a dependency that is actually not present in a repository.
C. last update contains more information about the most recent resolution
and download. The amount of debugging output from Ivy is high, so you
may want to use lastGrep (run help lastGrep for usage).
D. Run clean and then update. If this works, it could indicate a bug in sbt,
but the problem would need to be reproduced in order to diagnose and fix
it.
E. Before deleting all of the Ivy cache, first try deleting files in
~/.ivy2/cache related to problematic dependencies. For example,
if there are problems with dependency "org.example" % "demo" %
"1.0", delete ~/.ivy2/cache/org.example/demo/1.0/ and retry update.
This avoids needing to redownload all dependencies.
F. Normal sbt usage should not require deleting files from ~/.ivy2/cache,
especially if the first four steps have been followed. If deleting the cache
fixes a dependency management issue, please try to reproduce the issue
and submit a test case.
Plugins
These troubleshooting steps can be run for plugins by changing to the build def-
inition project, running the commands, and then returning to the main project.
For example:
Notes
Library Management
There’s now a getting started page about library management, which you may
want to read first.
Documentation Maintenance Note: it would be nice to remove the overlap be-
tween this page and the getting started page, leaving this page with the more
advanced topics such as checksums and external Ivy files.
Introduction
There are two ways for you to manage libraries with sbt: manually or auto-
matically. These two ways can be mixed as well. This page discusses the two
approaches. All configurations shown here are settings that go either directly
in a .sbt file or are appended to the settings of a Project in a .scala file.
Manually managing dependencies involves copying any jars that you want to use
to the lib directory. sbt will put these jars on the classpath during compilation,
testing, running, and when using the interpreter. You are responsible for adding,
removing, updating, and otherwise managing the jars in this directory. No
modifications to your project definition are required to use this method unless
you would like to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the unmanagedBase setting
in your project definition. For example, to use custom_lib/:
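A sketch:

```scala
// store manually managed jars in custom_lib/ instead of lib/
unmanagedBase := baseDirectory.value / "custom_lib"
```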
If you want more control and flexibility, override the unmanagedJars task, which
ultimately provides the manual dependencies to sbt. The default implementa-
tion is roughly:
If you want to add jars from multiple directories in addition to the default
directory, you can do:
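A sketch that collects jars from several directories (the directory names are illustrative):

```scala
unmanagedJars in Compile ++= {
  val base = baseDirectory.value
  // jars from libA/, any sibling directory starting with "b", and libC/
  val dirs = (base / "libA") +++ (base * "b*") +++ (base / "libC")
  (dirs ** "*.jar").classpath
}
```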
sbt uses Apache Ivy to implement dependency management in all three cases.
The default is to use inline declarations, but external configuration can be ex-
plicitly selected. The following sections describe how to use each method of
automatic dependency management.
Dependencies Declaring a dependency looks like:
or
If you are using a dependency that was built with sbt, double the first % to be
%%:
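The general forms, where groupID, artifactID, and revision are Strings, can be sketched as:

```scala
libraryDependencies += groupID % artifactID % revision
// or, with a configuration:
libraryDependencies += groupID % artifactID % revision % configuration

// a concrete dependency built with sbt, using %%:
libraryDependencies += "org.scala-stm" %% "scala-stm" % "0.7"
```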
This will use the right jar for the dependency built with the version of Scala
that you are currently using. If you get an error while resolving this kind of
dependency, that dependency probably wasn’t published for the version of Scala
you are using. See Cross Build for details.
Ivy can select the latest revision of a module according to constraints you specify.
Instead of a fixed revision like "1.6.1", you specify "latest.integration",
"2.9.+", or "[1.0,)". See the Ivy revisions documentation for details.
For example:
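A sketch using hypothetical coordinates:

```scala
// select the latest revision satisfying the constraint
libraryDependencies += "org.example" % "demo" % "latest.integration"
```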
sbt can search your local Maven repository if you add it as a repository:
To use the local repository, but not the Maven Central repository:
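A sketch of both configurations:

```scala
// add the local Maven repository as a resolver
resolvers += "Local Maven Repository" at "file://" + Path.userHome.absolutePath + "/.m2/repository"

// use the default resolvers except Maven Central
externalResolvers := Resolver.withDefaultResolvers(resolvers.value, mavenCentral = false)
```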
Override all resolvers for all builds The repositories used to retrieve sbt,
Scala, plugins, and application dependencies can be configured globally and de-
clared to override the resolvers configured in a build or plugin definition. There
are two parts:
[repositories]
local
my-maven-repo: http://example.org/repo
my-ivy-repo: http://example.org/ivy-repo/, [organization]/[module]/[revision]/[type]s/[artifa
Explicit URL If your project requires a dependency that is not present in a
repository, a direct URL to its jar can be specified as follows:
The URL is only used as a fallback if the dependency cannot be found through
the configured repositories. Also, the explicit URL is not included in published
metadata (that is, the pom or ivy.xml).
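A sketch in the style of the original example (URL is illustrative):

```scala
// fall back to an explicit URL if the jar cannot be found in any repository
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
```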
Classifiers You can specify the classifier for a dependency using the
classifier method. For example, to get the jdk15 version of TestNG:
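A sketch:

```scala
// select the jdk15 classifier of the TestNG artifact
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
```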
libraryDependencies +=
"org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"
transitiveClassifiers := Seq("sources")
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")
The excludeAll method is more flexible, but because it cannot be represented
in a pom.xml, it should only be used when a pom doesn’t need to be generated.
For example,
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx"),
ExclusionRule(organization = "javax.jms")
)
libraryDependencies +=
"org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()
Note that this is not transitive. Use the update-*classifiers tasks for that.
projectID := {
val previous = projectID.value
previous.extra("color" -> "blue", "component" -> "compiler-interface")
}
Inline Ivy XML sbt additionally supports directly specifying the configura-
tions or dependencies sections of an Ivy configuration file inline. You can mix
this with inline Scala dependency and repository declarations.
For example:
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
Ivy Home Directory By default, sbt uses the standard Ivy home direc-
tory location ${user.home}/.ivy2/. This can be configured machine-wide, for
use by both the sbt launcher and by projects, by setting the system property
sbt.ivy.home in the sbt startup script (described in Setup).
For example:
Conflict Management The conflict manager decides what to do when de-
pendency resolution brings in different versions of the same library. By default,
the latest revision is selected. This can be changed by setting conflictManager,
which has type ConflictManager. See the Ivy documentation for details on the
different conflict managers. For example, to specify that no conflicts are allowed,
conflictManager := ConflictManager.strict
With this set, any conflicts will generate an error. To resolve a conflict,
The default conflict manager will select the newer version of log4j, 1.2.16. This
can be confirmed in the output of show update, which shows the newer version
as being selected and the older version as not selected:
To say that we prefer the version we’ve specified over the version from indirect
dependencies, use force():
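A sketch:

```scala
// prefer our explicitly specified log4j revision over transitive ones
libraryDependencies += "log4j" % "log4j" % "1.2.16" force()
```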
The output of show update is now reversed:
The default conflict manager chooses the latest revision of log4j, 1.2.17:
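The revision can be forced without adding a direct dependency by using dependencyOverrides, sketched as:

```scala
// force log4j to 1.2.16 wherever it appears transitively
dependencyOverrides += "log4j" % "log4j" % "1.2.16"
```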
This will not add a direct dependency on log4j, but will force the revision to be
1.2.16. This is confirmed by the output of show update:
Note: this is an Ivy-only feature and will not be included in a
published pom.xml.
Configurations Ivy configurations are a useful feature for your build when
you need custom groups of dependencies, such as for a plugin. Ivy configurations
are essentially named sets of dependencies. You can read the Ivy documentation
for details.
The built-in use of configurations in sbt is similar to scopes in Maven. sbt adds
dependencies to different classpaths by the configuration that they are defined
in. See the description of Maven Scopes for details.
You put a dependency in a configuration by selecting one or more of its con-
figurations to map to one or more of your project’s configurations. The most
common case is to have one of your configurations A use a dependency’s config-
uration B. The mapping for this looks like "A->B". To apply this mapping to a
dependency, add it to the end of your dependency definition:
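A sketch (the ScalaTest version is illustrative):

```scala
// our "test" configuration uses ScalaTest's "compile" configuration
libraryDependencies += "org.scalatest" %% "scalatest" % "2.1.3" % "test->compile"
```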
This says that your project’s "test" configuration uses ScalaTest’s "compile"
configuration. See the Ivy documentation for more advanced mappings. Most
projects published to Maven repositories will use the "compile" configuration.
A useful application of configurations is to group dependencies that are not used
on normal classpaths. For example, your project might use a "js" configuration
to automatically download jQuery and then include it in your jar by modifying
resources. For example:
The config method defines a new configuration with name "js" and makes it
private to the project so that it is not used for publishing. See Update Report
for more information on selecting managed artifacts.
A configuration without a mapping (no "->") is mapped to "default" or
"compile". The -> is only needed when mapping to a different configuration
than those. The ScalaTest dependency above can then be shortened to:
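For example:

```scala
// "test" alone maps to the dependency's default/compile configuration
libraryDependencies += "org.scalatest" %% "scalatest" % "2.1.3" % "test"
```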
External Maven or Ivy For this method, create the configuration files as you
would for Maven (pom.xml) or Ivy (ivy.xml and optionally ivysettings.xml).
External configuration is selected by using one of the following expressions.
externalIvySettings()
or
externalIvySettings(baseDirectory.value / "custom-settings-name.xml")
or
externalIvySettingsURL(url("your_url_here"))
externalIvyFile()
or
externalIvyFile(Def.setting(baseDirectory.value / "custom-name.xml"))
Because Ivy files specify their own configurations, sbt needs to know which con-
figurations to use for the compile, runtime, and test classpaths. For example,
to specify that the Compile classpath should use the ‘default’ configuration:
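A sketch:

```scala
// use the 'default' configuration from the Ivy file for the compile classpath
classpathConfiguration in Compile := config("default")
```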
externalPom()
or
externalPom(Def.setting(baseDirectory.value / "custom-name.xml"))
Full Ivy Example For example, a build.sbt using external Ivy files might
look like:
externalIvySettings()
externalIvyFile(Def.setting(baseDirectory.value / "ivyA.xml"))
Proxy Repositories
It’s often the case that users wish to set up a maven/ivy proxy repository in-
side their corporate firewall, and have developer sbt instances resolve artifacts
through such a proxy. Let's detail the exact changes that must be made for this to work.
Overview
The situation arises when many developers inside an organization are attempting
to resolve artifacts. Each developer’s machine will hit the internet and download
an artifact, regardless of whether or not another on the team has already done so.
Proxy repositories provide a single point of remote download for an organization.
In addition to addressing control and security concerns, proxy repositories are primarily important for increased speed across a team.
There are many good proxy repository solutions out there, with the big three
being (in alphabetical order):
• Archiva
• Artifactory
• Nexus
Once you have a proxy repository installed and configured, then it’s time to
configure sbt for your needs. Read the note at the bottom about proxy issues
with ivy repositories.
sbt Configuration
sbt requires configuration in two places to make use of a proxy repository. The
first is the ~/.sbt/repositories file, and the second is the launcher script.
~/.sbt/repositories
The repositories file is an external configuration for the Launcher. The exact
syntax for the configuration file is detailed in the sbt Launcher.
Here’s an example config:
[repositories]
local
my-ivy-proxy-releases: http://repo.company.com/ivy-releases/, [organization]/[module]/(scal
my-maven-proxy-releases: http://repo.company.com/maven-releases/
Launcher Script The sbt launcher supports two configuration options that al-
low the usage of proxy repositories. The first is the sbt.override.build.repos
setting and the second is the sbt.repository.config setting.
-Dsbt.override.build.repos=true
sbt.repository.config If you are unable to create a ~/.sbt/repositories
file, due to user permission errors or for convenience of developers, you can
modify the sbt start script directly with the following:
-Dsbt.repository.config=<path-to-your-repo-file>
This is only necessary if users do not already have their own default repository
file.
The most common mistake made when setting up a proxy repository for sbt is attempting to merge both Maven and Ivy repositories into the same proxy repository. While some repository managers will allow this, it's not recommended.
Even if your company does not use Ivy, sbt uses a custom Ivy layout to handle binary compatibility constraints of its own plugins. To ensure that these are resolved correctly, simply set up two virtual/proxy repositories, one for Maven and one for Ivy.
Here’s an example setup:
Publishing
This page describes how to publish your project. Publishing consists of upload-
ing a descriptor, such as an Ivy file or Maven POM, and artifacts, such as a
jar or war, to a repository so that other projects can specify your project as a
dependency.
The publish action is used to publish your project to a remote repository.
To use publishing, you need to specify the repository to publish to and the
credentials to use. Once these are set up, you can run publish.
The publishLocal action is used to publish your project to a local Ivy repository.
You can then use this project from other projects on the same machine.
publishTo := Some(Resolver.file("file", new File( "path/to/my/maven-repo/releases" )) )
If you’re using Maven repositories you will also have to select the right repository
depending on your artifacts: SNAPSHOT versions go to the /snapshot repos-
itory while other versions go to the /releases repository. Doing this selection
can be done by using the value of the version SettingKey:
publishTo := {
val nexus = "https://oss.sonatype.org/"
if (version.value.trim.endsWith("SNAPSHOT"))
Some("snapshots" at nexus + "content/repositories/snapshots")
else
Some("releases" at nexus + "service/local/staging/deploy/maven2")
}
Credentials
There are two ways to specify credentials for such a repository. The first is to
specify them inline:
The second and better way is to load them from a file, for example:
The credentials file is a properties file with keys realm, host, user, and
password. For example:
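A sketch of both approaches (the realm, host, and account values are illustrative):

```scala
// inline:
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus.example.org", "admin", "secret")

// from a file:
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")

// where ~/.ivy2/.credentials contains lines such as:
//   realm=Sonatype Nexus Repository Manager
//   host=nexus.example.org
//   user=admin
//   password=secret
```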
Cross-publishing
Published artifacts
By default, the main binary jar, a sources jar, and an API documentation jar are published. You can declare other types of artifacts to publish and disable or modify the default artifacts. See the Artifacts page for details.
pomExtra :=
<licenses>
<license>
<name>Apache 2</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
makePom adds to the POM any Maven-style repositories you have declared. You
can filter these by modifying pomRepositoryFilter, which by default excludes
local repositories. To instead only include local repositories:
There is also a pomPostProcess setting that can be used to manipulate the final XML before it is written. Its type is Node => Node.
Publishing Locally
The publishLocal command will publish to the local Ivy repository. By default,
this is in ${user.home}/.ivy2/local. Other projects on the same machine can
then list the project as a dependency. For example, if the sbt project you are publishing has configuration parameters like:
name := "My Project"
organization := "org.me"
version := "0.1-SNAPSHOT"
The version number you select must end with SNAPSHOT, or you must change
the version number each time you publish. Ivy maintains a cache, and it stores
even local projects in that cache. If Ivy already has a version cached, it will
not check the local repository for updates, unless the version number matches
a changing pattern, and SNAPSHOT is one such pattern.
Resolvers
Maven
resolvers +=
"Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
This is the most common kind of user-defined resolver. The rest of this page describes how to define other types of repositories.
Predefined
For example, to use the java.net repository, use the following setting in your
build definition:
resolvers += JavaNet1Repository
Predefined repositories will go under Resolver going forward so they are in one
place:
Resolver.sonatypeRepo("releases") // Or "snapshots"
Custom
sbt provides an interface to the repository types available in Ivy: file, URL, SSH,
and SFTP. A key feature of repositories in Ivy is using patterns to configure
repositories.
Construct a repository definition using the factory in sbt.Resolver for the
desired type. This factory creates a Repository object that can be further
configured. The following table contains links to the Ivy documentation for the
repository type and the API documentation for the factory and repository class.
The SSH and SFTP repositories are configured identically except for the name
of the factory. Use Resolver.ssh for SSH and Resolver.sftp for SFTP.
Type         Factory         Ivy Docs         Factory API          Repository Class API
Filesystem   Resolver.file   Ivy filesystem   filesystem factory   FileRepository API
SFTP         Resolver.sftp   Ivy sftp         sftp factory         SftpRepository API
SSH          Resolver.ssh    Ivy ssh          ssh factory          SshRepository API
URL          Resolver.url    Ivy url          url factory          URLRepository API
Basic Examples These are basic examples that use the default Maven-style
repository layout.
or customize the layout pattern described in the Custom Layout section below.
Authentication for the repositories returned by sftp and ssh can be configured
by the as methods.
To use password authentication:
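A sketch (host name is illustrative):

```scala
// SSH repository with password authentication
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")
```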
or omit the password to be prompted for it. To use key authentication, provide the private key file:
resolvers += {
val keyFile: File = ...
Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}
Custom Layout These examples specify custom repository layouts using patterns. The factory methods accept a Patterns instance that defines the patterns to use. The patterns are first resolved against the base file or URL. The default patterns give the default Maven-style layout. Provide a different Patterns object to use a different layout. For example:
You can specify multiple patterns or patterns for the metadata and artifacts
separately. You can also specify whether the repository should be Maven com-
patible (as defined by Ivy). See the patterns API for the methods to use.
For filesystem and URL repositories, you can specify absolute patterns by omit-
ting the base URL, passing an empty Patterns instance, and using ivys and
artifacts:
Update Report
Any argument to select may be omitted, in which case all values are allowed
for the corresponding component. For example, if the ConfigurationFilter
is not specified, all configurations are accepted. The individual filter types are
discussed below.
Filter Basics Configuration, module, and artifact filters are typically built
by applying a NameFilter to each component of a Configuration, ModuleID,
or Artifact. A basic NameFilter is implicitly constructed from a String, with
* interpreted as a wildcard.
import sbt._
// each argument is of type NameFilter
val mf: ModuleFilter = moduleFilter(organization = "*sbt*",
name = "main" | "actions", revision = "1.*" - "1.0")
// unspecified arguments match everything by default
val mf: ModuleFilter = moduleFilter(organization = "net.databinder")
import sbt._
// here the function value of type String => Boolean is implicitly converted to a NameFilter
val nf: NameFilter = (s: String) => s.startsWith("dispatch-")
import sbt._
val a: ConfigurationFilter = Set("compile", "test")
val b: ConfigurationFilter = (c: String) => c.startsWith("r")
val c: ConfigurationFilter = a | b
ModuleFilter A module filter is defined by three NameFilters: one for the
organization, one for the module name, and one for the revision. Each compo-
nent filter must match for the whole module filter to match. A module filter is
explicitly constructed by the moduleFilter method:
import sbt._
val a: ModuleFilter = moduleFilter(name = "dispatch-twitter", revision = "0.7.8")
val b: ModuleFilter = moduleFilter(name = "dispatch-*")
val c: ModuleFilter = b - a
import sbt._
val a: ArtifactFilter = artifactFilter(classifier = "javadoc")
val b: ArtifactFilter = artifactFilter(`type` = "jar")
val c: ArtifactFilter = b - a
DependencyFilter A DependencyFilter is typically constructed by com-
bining other DependencyFilters together using &&, ||, and --. Configuration,
module, and artifact filters are DependencyFilters themselves and can be used
directly as a DependencyFilter or they can build up a DependencyFilter.
Note that the symbols for the DependencyFilter combining methods are dou-
bled up to distinguish them from the combinators of the more specific filters
for configurations, modules, and artifacts. These double-character methods will
always return a DependencyFilter, whereas the single character methods pre-
serve the more specific filter type. For example:
import sbt._
// an example DependencyFilter built from component filters
val df: DependencyFilter =
  configurationFilter(name = "compile" | "test") &&
  artifactFilter(`type` = "jar") ||
  moduleFilter(name = "dispatch-*")
Here, we used && and || to combine individual component filters into a depen-
dency filter, which can then be provided to the UpdateReport.matches method.
Alternatively, the UpdateReport.select method may be used, which is equiv-
alent to calling matches with its arguments combined with &&.
This part of the documentation has pages documenting particular sbt topics in
detail. Before reading anything in here, you will need the information in the
Getting Started Guide as a foundation.
Tasks
Tasks and settings are introduced in the getting started guide, which you may
wish to read first. This page has additional details and background and is
intended more as a reference.
Introduction
Both settings and tasks produce values, but there are two major differences
between them:
1. Settings are evaluated at project load time. Tasks are executed on demand,
often in response to a command from the user.
2. At the beginning of project loading, settings and their dependencies are
fixed. Tasks can introduce new tasks during execution, however.
Features
1. By integrating with the settings system, tasks can be added, removed, and
modified as easily and flexibly as settings.
2. Input Tasks use parser combinators to define the syntax for their argu-
ments. This allows flexible syntax and tab-completions in the same way
as Commands.
3. Tasks produce values. Other tasks can access a task’s value by calling
value on it within a task definition.
4. Dynamically changing the structure of the task graph is possible. Tasks
can be injected into the execution graph based on the result of another
task.
5. There are ways to handle task failure, similar to try/catch/finally.
6. Each task has access to its own Logger that by default persists the logging
for that task at a more verbose level than is initially printed to the screen.
Defining a Task
Run “sbt hello” from command line to invoke the task. Run “sbt tasks” to see
this task listed.
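The hello task referred to here can be defined in build.sbt, for example:

```scala
lazy val hello = taskKey[Unit]("An example task")

hello := { println("Hello!") }
```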
Define the key To declare a new task, define a lazy val of type TaskKey:
The name of the val is used when referring to the task in Scala code and at
the command line. The string passed to the taskKey method is a description
of the task. The type parameter passed to taskKey (here, Int) is the type of
value produced by the task.
We’ll define a couple of other keys for the examples:
lazy val intTask = taskKey[Int]("An int task")
lazy val stringTask = taskKey[String]("A string task")
Implement the task There are three main parts to implementing a task once
its key is defined:
1. Determine the settings and other tasks needed by the task. They are the
task’s inputs.
2. Define the code that implements the task in terms of these inputs.
3. Determine the scope the task will go in.
These parts are then combined just like the parts of a setting are combined.
intTask := 1 + 2
stringTask := System.getProperty("user.name")
sampleTask := {
val sum = 1 + 2
println("sum: " + sum)
sum
}
Tasks with inputs Tasks with other tasks or settings as inputs are also
defined using :=. The values of the inputs are referenced by the value method.
This method is special syntax and can only be called when defining a task, such
as in the argument to :=. The following defines a task that adds one to the
value produced by intTask and returns the result.
sampleTask := intTask.value + 1
Task Scope As with settings, tasks can be defined in a specific scope. For
example, there are separate compile tasks for the compile and test scopes.
The scope of a task is defined the same as for a setting. In the following example,
test:sampleTask uses the result of compile:intTask.
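A sketch:

```scala
intTask in Compile := 5

// the Test-scoped task uses the Compile-scoped intTask
sampleTask in Test := (intTask in Compile).value * 2
```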
1. Assignment methods have the lowest precedence. These are methods with
names ending in =, except for !=, <=, >=, and names that start with =.
2. Methods starting with a letter have the next highest precedence.
3. Methods with names that start with a symbol and aren’t included in
1. have the highest precedence. (This category is divided further accord-
ing to the specific character it starts with. See the Scala specification
for details.)
Note that whenever .value is used, it must be within a task definition, such as
within Def.task above or as an argument to :=.
Modifying an Existing Task In the general case, modify a task by declaring
the previous task as an input.
// initial definition
intTask := 3

// overriding definition that references the previous definition
intTask := intTask.value + 1
Completely override a task by not declaring the previous task as an input. Each
of the definitions in the following example completely overrides the previous one.
That is, when intTask is run, it will only print #3.
intTask := {
println("#1")
3
}
intTask := {
println("#2")
5
}
intTask := {
println("#3")
sampleTask.value - 3
}
Introduction The general form of an expression that gets values from multi-
ple scopes is:
<setting-or-task>.all(<scope-filter>).value
Example A common scenario is getting the sources for all subprojects for
processing all at once, such as passing them to scaladoc. The task that we want
to obtain values for is sources and we want to get the values in all non-root
projects and in the Compile configuration. This looks like:
lazy val core = project
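Building on the description above, a sketch of the full definition — util and root are assumed sibling project definitions, and the selectors come from the ScopeFilter DSL:

```scala
lazy val util = project

lazy val root = project.settings(
  sources in Compile := {
    // select the Compile configuration of all non-root projects
    val filter: ScopeFilter =
      ScopeFilter(inAnyProject -- inProjects(ThisProject), inConfigurations(Compile))
    // each sources value is a Seq[File], so flatten the resulting Seq[Seq[File]]
    sources.all(filter).value.flatten
  }
)
```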
Unspecified filters If the task filter is not specified, as in the example above,
the default is to select scopes without a specific task (global). Similarly, an
unspecified configuration filter will select scopes in the global configuration. The
project filter should usually be explicit, but if left unspecified, the current project
context will be used.
Combining ScopeFilters ScopeFilters may be combined with the &&, ||,
--, and - methods:
For example, the following selects the scope for the Compile and Test configu-
rations of the core project and the global configuration of the util project:
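A sketch of such a combined filter, assuming core and util are project definitions; the selector used for the global configuration is an assumption about the ScopeFilter API:

```scala
val filter: ScopeFilter =
  ScopeFilter(inProjects(core), inConfigurations(Compile, Test)) ||
  ScopeFilter(inProjects(util), inGlobalConfiguration) // global-configuration selector assumed
```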
More operations The all method applies to both settings (values of type
Initialize[T]) and tasks (values of type Initialize[Task[T]]). It returns a
setting or task that provides a Seq[T], as shown in this table:
Target
Result
Initialize[T]
Initialize[Seq[T]]
Initialize[Task[T]]
Initialize[Task[Seq[T]]]
This means that the all method can be combined with methods that construct
tasks and settings.
Missing values Some scopes might not define a setting or task. The ? and
?? methods can help in this case. They are both defined on settings and tasks
and indicate what to do when a key is undefined.
?
On a setting or task with underlying type T, this accepts no arguments and
returns a setting or task (respectively) of type Option[T]. The result is None if
the setting/task is undefined and Some[T] with the value if it is.
??
On a setting or task with underlying type T, this accepts an argument of type
T and uses this argument if the setting/task is undefined.
The following contrived example sets the maximum errors to be the maximum
of all aggregates of the current project.
maxErrors := {
// select the transitive aggregates for this project, but not the project itself
val filter: ScopeFilter =
ScopeFilter( inAggregates(ThisProject, includeRoot=false) )
// get the configured maximum errors in each selected scope,
// using 0 if not defined in a scope
val allMaxErrors: Seq[Int] =
(maxErrors ?? 0).all(filter).value
allMaxErrors.max
}
Multiple values from multiple scopes The target of all is any task or
setting, including anonymous ones. This means it is possible to get multiple
values at once without defining a new task or setting in each scope. A common
use case is to pair each value obtained with the project, configuration, or full
scope it came from.
For example, the following defines a task that prints non-Compile configurations
that define sbt plugins. This might be used to identify an incorrectly configured
build (or not, since this is a fairly contrived example):
checkPluginsTask := {
val oddPlugins: Seq[(String, Set[String])] =
pluginsWithConfig.all(filter).value
// Print each configuration that defines sbt plugins
for( (config, plugins) <- oddPlugins if plugins.nonEmpty )
println(s"$config defines sbt plugins: ${plugins.mkString(", ")}")
}
The examples in this section use the task keys defined in the previous section.
Streams: Per-task logging Per-task loggers are part of a more general sys-
tem for task-specific data called Streams. This allows controlling the verbosity
of stack traces and logging individually for tasks as well as recalling the last
logging for a task. Tasks also have access to their own persisted binary or text
data.
To use Streams, get the value of the streams task. This is a special task that
provides an instance of TaskStreams for the defining task. This type provides
access to named binary and text streams, named loggers, and a default logger.
The default Logger, which is the most commonly used aspect, is obtained by
the log method:
myTask := {
val s: TaskStreams = streams.value
s.log.debug("Saying hi...")
s.log.info("Hello!")
}
traceLevel in myTask := 5
To obtain the last logging output from a task, use the last command:
$ last myTask
[debug] Saying hi...
[info] Hello!
Dynamic Computations with Def.taskDyn
It can be useful to use the result of a task to determine the next tasks to evaluate.
This is done using Def.taskDyn. The result of taskDyn is called a dynamic task
because it introduces dependencies at runtime. The taskDyn method supports
the same syntax as Def.task and := except that you return a task instead of a
plain value.
For example,
myTask := {
val num = dynamic.value
println(s"Number selected was $num")
}
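The dynamic task referenced above must itself be built with Def.taskDyn. A sketch, where taskA and taskB are hypothetical keys of type Initialize[Task[Int]]:

```scala
lazy val dynamic = Def.taskDyn[Int] {
  // inspect a setting or task result at runtime, then return the task to run
  if (scalaVersion.value.startsWith("2.10")) taskA // taskA/taskB are hypothetical
  else taskB
}
```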
failure The failure method creates a new task that returns the Incomplete
value when the original task fails to complete normally. If the original task
succeeds, the new task fails. Incomplete is an exception with information about
any tasks that caused the failure and any underlying exceptions thrown during
task execution.
For example:
intTask := sys.error("Failed.")
intTask := {
println("Ignoring failure: " + intTask.failure.value)
3
}
This overrides the intTask so that the original exception is printed and the
constant 3 is returned.
failure does not prevent other tasks that depend on the target from failing.
Consider the following example:
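A sketch of task definitions consistent with the result table that follows — aTask consumes intTask.failure, bTask depends on intTask normally, and cTask depends on both:

```scala
lazy val intTask = taskKey[Int]("An int task")
lazy val aTask = taskKey[Int]("a task")
lazy val bTask = taskKey[Int]("b task")
lazy val cTask = taskKey[Int]("c task")

intTask := sys.error("Failed.")

// succeeds only when intTask fails
aTask := {
  val inc: Incomplete = intTask.failure.value
  println("Ignoring failure: " + inc)
  3
}

// a normal dependency: fails whenever intTask fails
bTask := intTask.value + 1

// depends on both, so it fails whether intTask fails or succeeds
cTask := aTask.value + bTask.value
```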
The following table lists the results of each task depending on the initially
invoked task:

invoked task | intTask result | aTask result | bTask result | cTask result | overall result
intTask      | failure        | not run      | not run      | not run      | failure
aTask        | failure        | success      | not run      | not run      | success
bTask        | failure        | not run      | failure      | not run      | failure
cTask        | failure        | success      | failure      | failure      | failure
intTask      | success        | not run      | not run      | not run      | success
aTask        | success        | failure      | not run      | not run      | failure
bTask        | success        | not run      | success      | not run      | success
cTask        | success        | failure      | success      | failure      | failure
The overall result is always the same as the root task (the directly invoked task).
A failure turns a success into a failure, and a failure into an Incomplete. A
normal task definition fails when any of its inputs fail and computes its value
otherwise.
result The result method creates a new task that returns the full
Result[T] value for the original task. Result has the same structure as
Either[Incomplete, T] for a task result of type T. That is, it has two
subtypes: Inc, which wraps an Incomplete in case of failure, and Value,
which wraps the task’s result in case of success.
Thus, the task created by result executes whether or not the original task
succeeds or fails.
For example:
intTask := sys.error("Failed.")
intTask := {
intTask.result.value match {
case Inc(inc: Incomplete) =>
println("Ignoring failure: " + inc)
3
case Value(v) =>
println("Using successful result: " + v)
v
}
}
This overrides the original intTask definition so that if the original task fails,
the exception is printed and the constant 3 is returned. If it succeeds, the value
is printed and returned.
andFinally The andFinally method defines a new task that runs the original
task and evaluates a side effect regardless of whether the original task succeeded.
The result of the task is the result of the original task. For example:
intTask := intTaskImpl.andFinally { println("andFinally") }.value
This modifies the original intTask to always print “andFinally” even if the task
fails.
Note that andFinally constructs a new task. This means that the new task has
to be invoked in order for the extra block to run. This is important when calling
andFinally on another task instead of overriding a task like in the previous
example. For example, consider this code:
otherIntTask := intTaskImpl.andFinally { println("finally") }.value
If intTask is run directly, otherIntTask is never involved in execution. This
case is similar to the following plain Scala code:
def intTask(): Int = sys.error("failed")
def otherIntTask(): Int = try intTask() finally { println("finally") }
intTask()
It is obvious here that calling intTask() will never result in “finally” being
printed.
Input Tasks
Input Tasks parse user input and produce a task to run. Parsing Input describes
how to use the parser combinators that define the input syntax and tab comple-
tion. This page describes how to hook those parser combinators into the input
task system.
Input Keys
A key for an input task is of type InputKey and represents the input task like a
SettingKey represents a setting or a TaskKey represents a task. Define a new
input task key using the inputKey.apply factory method:
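For example, the demo key used below can be defined as:

```scala
lazy val demo = inputKey[Unit]("A demo input task.")
```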
The definition of an input task is similar to that of a normal task, but it can
also use the result of a
Parser applied to user input. Just as the special value method gets the value
of a setting or task, the special parsed method gets the result of a Parser.
demo := {
// get the result of parsing
val args: Seq[String] = spaceDelimited("<arg>").parsed
// Here, we also use the value of the `scalaVersion` setting
println("The current Scala version is " + scalaVersion.value)
println("The arguments to demo were:")
args foreach println
}
Input Task using Parsers
The Parser provided by the spaceDelimited method does not provide any flex-
ibility in defining the input syntax. Using a custom parser is just a matter of
defining your own Parser as described on the Parsing Input page.
Constructing the Parser The first step is to construct the actual Parser
by defining a value of one of the following types:
• Parser[I]: a basic parser that does not use any settings
• Initialize[Parser[I]]: a parser whose definition depends on one or more settings
• Initialize[State => Parser[I]]: a parser that is defined using settings and the State
We already saw an example of the first case with spaceDelimited, which doesn’t
use any settings in its definition. As an example of the third case, the following
defines a contrived Parser that uses the project’s Scala and sbt version set-
tings as well as the state. To use these settings, we need to wrap the Parser
construction in Def.setting and get the setting values with the special value
method:
import complete.DefaultParsers._
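A sketch of such a parser, relying on the import above; the literal tokens are illustrative, chosen to match the (String, String) result type and the sample values described below:

```scala
val parser: Initialize[State => Parser[(String, String)]] =
  Def.setting {
    (state: State) =>
      // each alternative pairs a keyword with a value derived from settings or State
      ( token("scala" <~ Space) ~ token(scalaVersion.value) ) |
      ( token("sbt" <~ Space) ~ token(sbtVersion.value) ) |
      ( token("commands" <~ Space) ~
        token(state.remainingCommands.size.toString) )
  }
```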
This Parser definition will produce a value of type (String,String). The input
syntax defined isn’t very flexible; it is just a demonstration. It will produce one
of the following values for a successful parse (assuming the current Scala version
is 2.10.3, the current sbt version is 0.13.5, and there are 3 commands left to run):
Again, we were able to access the current Scala and sbt version for the project
because they are settings. Tasks cannot be used to define the parser.
Constructing the Task Next, we construct the actual task to execute from
the result of the Parser. For this, we define a task as usual, but we can access
the result of parsing via the special parsed method on Parser.
The following contrived example uses the previous example’s output (of type
(String,String)) and the result of the package task to print some information
to the screen.
demo := {
val (tpe, value) = parser.parsed
println("Type: " + tpe)
println("Value: " + value)
println("Packaged: " + packageBin.value.getAbsolutePath)
}
1. You can use other settings (via Initialize) to construct an input task.
2. You can use the current State to construct the parser.
3. The parser accepts user input and provides tab completion.
4. The parser produces the task to run.
So, you can use settings or State to construct the parser that defines an input
task’s command line syntax. This was described in the previous section. You
can then use settings, State, or user input to construct the task to run. This
is implicit in the input task syntax.
In both situations, the underlying Parser is sequenced with other parsers in
the input task definition. In the case of .evaluated, the generated task is
evaluated.
The following example applies the run input task, a literal separator parser --,
and run again. The parsers are sequenced in order of syntactic appearance, so
that the arguments before -- are passed to the first run and the ones after are
passed to the second.
run2 := {
val one = (run in Compile).evaluated
val sep = separator.parsed
val two = (run in Compile).evaluated
}
For a main class Demo that echoes its arguments, this looks like:
$ sbt
> run2 a b -- c d
[info] Running Demo c d
[info] Running Demo a b
c
d
a
b
Preapplying input
• partialInput applies the input and allows further input, such as from
the command line
• fullInput applies the input and terminates parsing, so that further input
is not accepted
In each case, the input is applied to the input task’s parser. Because input tasks
handle all input after the task name, they usually require initial whitespace to
be provided in the input.
Consider the example in the previous section. We can modify it so that we:
• Explicitly specify all of the arguments to the first run. We use name and
version to show that settings can be used to define and modify parsers.
• Define the initial arguments passed to the second run, but allow further
input on the command line.
// The argument string for the first run task is ' <name> <version>'
lazy val firstInput: Initialize[String] =
Def.setting(s" ${name.value} ${version.value}")
// Make the first arguments to the second run task ' red blue'
lazy val secondInput: String = " red blue"
run2 := {
val one = (run in Compile).fullInput(firstInput.value).evaluated
val two = (run in Compile).partialInput(secondInput).evaluated
}
For a main class Demo that echoes its arguments, this looks like:
$ sbt
> run2 green
[info] Running Demo demo 1.0
[info] Running Demo red blue green
demo
1.0
red
blue
green
The previous section showed how to derive a new InputTask by applying in-
put. In this section, applying input produces a Task. The toTask method on
Initialize[InputTask[T]] accepts the String input to apply and produces a
task that can be used normally. For example, the following defines a plain task
runFixed that can be used by other tasks or run directly without providing any
input:
lazy val runFixed = taskKey[Unit]("A task that hard codes the values to `run`")
runFixed := {
val _ = (run in Compile).toTask(" blue green").value
println("Done!")
}
For a main class Demo that echoes its arguments, running runFixed looks like:
$ sbt
> runFixed
[info] Running Demo blue green
blue
green
Done!
Each call to toTask generates a new task, but each task is configured the same
as the original InputTask (in this case, run) but with different input applied.
For example:
lazy val runFixed2 = taskKey[Unit]("A task that hard codes the values to `run`")
runFixed2 := {
val x = (run in Compile).toTask(" blue green").value
val y = (run in Compile).toTask(" red orange").value
println("Done!")
}
The different toTask calls define different tasks that each run the project’s main
class in a new JVM. That is, the fork setting configures both, each has the same
classpath, and each runs the same main class. However, each task passes different
arguments to the main class. For a main class Demo that echoes its arguments,
the output of running runFixed2 might look like:
$ sbt
> runFixed2
[info] Running Demo blue green
[info] Running Demo red orange
blue
green
red
orange
Done!
Commands
What is a “command”?
Introduction
In sbt, the syntax part, including tab completion, is specified with parser com-
binators. If you are familiar with the parser combinators in Scala’s standard
library, these are very similar. The action part is a function (State, T) =>
State, where T is the data structure produced by the parser. See the Parsing
Input page for how to use the parser combinators.
State provides access to the build state, such as all registered Commands, the
remaining commands to execute, and all project-related information. See States
and Actions for details on State.
Finally, basic help information may be provided that is used by the help com-
mand to display command help.
Defining a Command
Full Example
import sbt._
import Keys._

object CommandExample extends Build {
// Define a project that makes the new command available
lazy val myProject = Project("commands-example", file(".")).settings(
commands += hello
)

// A simple, no-argument command that prints "Hi" and leaves the state unchanged
def hello = Command.command("hello") { (state: State) =>
println("Hi!")
state
}
}
Parsing and tab completion
This page describes the parser combinators in sbt. These parser combinators are
typically used to parse user input and provide tab completion for Input Tasks
and Commands. If you are already familiar with Scala’s parser combinators,
the methods are mostly the same except that their arguments are strict. There
are two additional methods for controlling tab completion that are discussed at
the end of the section.
Parser combinators build up a parser from smaller parsers. A Parser[T] in its
most basic usage is a function String => Option[T]. It accepts a String to
parse and produces a value wrapped in Some if parsing succeeds or None if it
fails. Error handling and tab completion make this picture more complicated,
but we’ll stick with Option for this discussion.
The following examples assume the imports:
import sbt._
import complete.DefaultParsers._
Basic parsers
// A parser that succeeds if the input is 'x', returning the Char 'x'
// and failing otherwise
val singleChar: Parser[Char] = 'x'
// A parser that succeeds if the input is "blue", returning the String "blue"
// and failing otherwise
val litString: Parser[String] = "blue"
// A parser that succeeds if the character is a digit, returning the matched Char
// The second argument, "digit", describes the parser and is used in error messages
val digit: Parser[Char] = charClass( (c: Char) => c.isDigit, "digit")
// A parser that produces the value 3 for an empty input string, fails otherwise
val alwaysSucceed: Parser[Int] = success( 3 )
Built-in parsers
Combining parsers
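The combined parsers below refer to select and color. Sketches of these building blocks, following the literal-parser examples above:

```scala
// A parser that matches either "fg" or "bg"
val select: Parser[String] = "fg" | "bg"

// A parser that matches either "green" or "blue"
val color: Parser[String] = "green" | "blue"
```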
// A parser that matches "fg" or "bg", a space, and then the color, returning the matched values.
// ~ is an alias for Tuple2.
val setColor: Parser[String ~ Char ~ String] =
select ~ ' ' ~ color
// Often, we don't care about the value matched by a parser, such as the space above
// For this, we can use ~> or <~, which keep the result of
// the parser on the right or left, respectively
val setColor2: Parser[String ~ String] = select ~ (' ' ~> color)
Transforming results
A key aspect of parser combinators is transforming results along the way into
more useful data structures. The fundamental methods for this are map and
flatMap. Here are examples of map and some convenience methods implemented
on top of map.
// Apply the `digits` parser and apply the provided function to the matched
// character sequence
val num: Parser[Int] = digits map { (chars: Seq[Char]) => chars.mkString.toInt }
// Match a digit character, returning the matched character, or return '0' if the input is not a digit
val digitWithDefault: Parser[Char] = charClass(_.isDigit, "digit") ?? '0'
Most parsers have reasonable default tab completion behavior. For example,
the string and character literal parsers will suggest the underlying literal for
an empty input string. However, it is impractical to determine the valid com-
pletions for charClass, since it accepts an arbitrary predicate. The examples
method defines explicit completions for such a parser:
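For example (the suggested digits are illustrative):

```scala
val digitWithExamples: Parser[Char] =
  charClass((c: Char) => c.isDigit, "digit").examples("0", "1", "2")
```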
Tab completion will use the examples as suggestions. The other method con-
trolling tab completion is token. The main purpose of token is to determine
the boundaries for suggestions. For example, if your parser is:
"console" | (("fg" | "bg") ~ ' ' ~ ("green" | "blue"))
then the potential completions on empty input are:
console fg green fg blue bg green bg blue
Typically, you want to suggest smaller segments or the number of suggestions
becomes unmanageable. A better parser is:
token( ("fg" | "bg") ~ ' ') ~ token("green" | "blue")
State is the entry point to all available information in sbt. The key methods
are:
The action part of a command performs work and transforms State. The follow-
ing sections discuss State => State transformations. As mentioned previously,
a command will typically handle a parsed value as well: (State, T) => State.
Command-related data
This takes the current commands, appends new commands, and drops dupli-
cates. Alternatively, State has a convenience method for doing the above:
Some examples of functions that modify the remaining commands to execute:
The first adds a command that will run after all currently specified commands
run. The second inserts a command that will run next. The remaining com-
mands will run after the inserted command completes.
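Sketches of such functions, assuming State's remainingCommands field and the :: prepend operator on State:

```scala
// run the given command after all currently specified commands run
def addCommand(cmd: String, state: State): State =
  state.copy(remainingCommands = state.remainingCommands :+ cmd)

// run the given command next; the remaining commands run afterwards
def insertCommand(cmd: String, state: State): State =
  cmd :: state
```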
To indicate that a command has failed and execution should not continue, return
state.fail.
Project-related data
Extracted provides:
Project data
Here, a SettingKey[T] is typically obtained from Keys and is the same type
that is used to define settings in .sbt files, for example. Scope selects the scope
the key is obtained for. There are convenience overloads of in that can be used
to specify only the required scope axes. See Structure.scala for where in and
other parts of the settings interface are defined. Some examples:
import Keys._
val extracted: Extracted
import extracted._
// get the package options for the `test:packageSrc` task or Nil if none are defined
val pkgOpts: Seq[PackageOption] = packageOptions in (currentRef, Test, packageSrc) get structure.data
A URI identifies a build and root identifies the initial build loaded. Load-
edBuildUnit provides information about a single build. The key members of
LoadedBuildUnit are:
Classpaths
Running tasks
It can be useful to run a specific project task from a command (not from another
task) and get its result. For example, an IDE-related command might want
to get the classpath from a project or a task might analyze the results of a
compilation. The relevant method is Project.evaluateTask, which has the
following signature:
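As of sbt 0.13, the signature is approximately:

```scala
def evaluateTask[T](taskKey: ScopedKey[Task[T]], state: State): Option[Result[T]]
```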
For example,
// This selects the main 'compile' task for the current project.
// The value produced by 'compile' is of type inc.Analysis,
// which contains information about the compiled code.
val taskKey = Keys.compile in Compile
For getting the test classpath of a specific project, use this key:
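A sketch, where the build URI and project id are illustrative placeholders for your own build:

```scala
val projectRef = ProjectRef(uri("file:///home/user/checkout/"), "project-id")
val taskKey = Keys.fullClasspath in (projectRef, Test)
```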
To access the current State from a task, use the state task as an input. For
example,
Tasks/Settings: Motivation
This page motivates the task and settings system. You should already know
how to use tasks and settings, which are described in the getting started guide
and on the Tasks page.
An important aspect of the task system is to combine two common, related
steps in a build:
1. Dependency declarations
2. Some form of shared state
makeFoo()
doSomething(foo)
This example is rather exaggerated in its badness, but I claim it is nearly the
same situation as our two-step task definitions. Particular reasons this is bad
include:
1. A caller must know to call makeFoo() before accessing foo.
2. makeFoo() and doSomething(foo) communicate through the shared variable
foo, which other code may also modify.
3. Access to foo is unsynchronized, so it is not thread safe.
The first point is like declaring a task dependency, the second is like two tasks
modifying the same state (either project variables or files), and the third is a
consequence of unsynchronized, shared state.
In Scala, we have the built-in functionality to easily fix this: lazy val.
lazy val foo = makeFoo()
doSomething(foo)
Here, lazy val gives us thread safety, guaranteed initialization before access,
and immutability all in one, DRY construct. The task system in sbt does the
same thing for tasks (and more, but we won’t go into that here) that lazy val
did for our bad example.
A task definition must declare its inputs and the type of its output. sbt will
ensure that the input tasks have run and will then provide their results to the
function that implements the task, which will generate its own result. Other
tasks can use this result and be assured that the task has run (once) and be
thread-safe and typesafe in the process.
The general form of a task definition looks like:
myTask := {
val a: A = aTask.value
val b: B = bTask.value
... do something with a, b and generate a result ...
}
(This is only intended to be a discussion of the ideas behind tasks, so see the sbt
Tasks page for details on usage.) Here, aTask is assumed to produce a result of
type A and bTask is assumed to produce a result of type B.
Application
As an example, consider generating a zip file containing the binary jar, source jar,
and documentation jar for your project. First, determine what tasks produce the
jars. In this case, the input tasks are packageBin, packageSrc, and packageDoc
in the main Compile scope. The result of each of these tasks is the File for the
jar that they generated. Our zip file task is defined by mapping these package
tasks and including their outputs in a zip file. As good practice, we then return
the File for this zip so that other tasks can map on the zip task.
zip := {
val bin: File = (packageBin in Compile).value
val src: File = (packageSrc in Compile).value
val doc: File = (packageDoc in Compile).value
val out: File = zipPath.value
val inputs: Seq[(File,String)] = Seq(bin, src, doc) x Path.flat
IO.zip(inputs, out)
out
}
The val inputs line defines how the input files are mapped to paths in the
zip. See Mapping Files for details. The explicit types are not required, but are
included for clarity.
The zipPath input would be a custom task to define the location of the zip file.
For example:
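A sketch of such a definition:

```scala
lazy val zipPath = taskKey[File]("The location of the zip file to produce")

zipPath := target.value / "out.zip"
```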
Anything that is necessary for building the project should go in project/. This
includes things like the web plugin. ~/.sbt/ should contain local customizations
and commands for working with a build, but are not necessary. An example is
an IDE plugin.
Local settings
There are two options for settings that are specific to a user. An example of
such a setting is inserting the local Maven repository at the beginning of the
resolvers list:
resolvers := {
val localMaven = "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
localMaven +: resolvers.value
}
.sbtrc
Put commands to be executed when sbt starts up in a .sbtrc file, one per
line. These commands run before a project is loaded and are useful for defining
aliases, for example. sbt executes commands in $HOME/.sbtrc (if it exists) and
then <project>/.sbtrc (if it exists).
Generated files
Don’t hard code constants, like the output directory target/. This is especially
important for plugins. A user might change the target setting to point to
build/, for example, and the plugin needs to respect that. Instead, use the
setting, like:
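For example, base any generated output on the target setting rather than a literal path (myDirectory is an illustrative key):

```scala
myDirectory := target.value / "sub-directory"
```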
Don’t “mutate” files
This arrangement is not always possible, but it should be the rule and not the
exception.
file("/home/user/A.scala")
base / "A.scala"
This is related to the no hard coding best practice because the proper way
involves referencing the baseDirectory setting. For example, the following
defines the myPath setting to be the <base>/licenses/ directory.
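A sketch of that definition (myPath is an illustrative key):

```scala
myPath := baseDirectory.value / "licenses"
```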
In Java (and thus in Scala), a relative File is relative to the current working di-
rectory. The working directory is not always the same as the build root directory
for a number of reasons.
The only exception to this rule is when specifying the base directory for a Project.
Here, sbt will resolve a relative File against the build root directory for you for
convenience.
Parser combinators
Plugins
There’s a getting started page focused on using existing plugins, which you may
want to read first.
A plugin is a way to use external code in a build definition. A plugin can
be a library used to implement a task (you might use Knockoff to write a
markdown processing task). A plugin can define a sequence of sbt settings that
are automatically added to all projects or that are explicitly declared for selected
projects. For example, a plugin might add a proguard task and associated
(overridable) settings. Finally, a plugin can define new commands (via the
commands setting).
sbt 0.13.5 introduces auto plugins, with improved dependency management among
plugins and explicitly scoped auto importing. Going forward, our recom-
mendation is to migrate to the auto plugins. The Plugins Best Practices page
describes the currently evolving guidelines to writing sbt plugins. See also the
general best practices.
Alternatively, you can create project/plugins.sbt with all of the desired sbt
plugins, any general dependencies, and any necessary repositories:
// plain library (not an sbt plugin) for use in the build definition
libraryDependencies += "org.example" % "utilities" % "1.3"
Many auto plugins automatically add their settings to projects; however,
some may require explicit enablement. Here’s an example:
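A sketch, where FooPlugin and BarPlugin stand in for plugins that require explicit enablement:

```scala
lazy val util = (project in file("util"))
  .enablePlugins(FooPlugin, BarPlugin)
  .settings(
    name := "hello-util"
  )
```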
See using plugins in the Getting Started guide for more details on using plugins.
By Description
• Automatically import selective names to .sbt files and the eval and set
commands.
• Specify plugin dependencies to other auto plugins.
• Automatically activate itself when all dependencies are present.
• Specify projectSettings, buildSettings, and globalSettings as ap-
propriate.
Plugin dependencies
1. add the setting sequence from the dependency as part of its own setting
sequence, or
2. tell the build users to include them in the right order.
This will pull in the right setting sequence from the plugins in the right order.
The key notion here is you declare the plugins you want, and sbt can fill in the
gap.
A plugin implementation is not required to produce an auto plugin, however. It
is a convenience for plugin consumers and because of the automatic nature, it
is not always appropriate.
A minimal sbt plugin is a Scala library that is built against the version of
Scala that sbt runs (currently, 2.10.3) or a Java library. Nothing special needs
to be done for this type of library. A more typical plugin will provide sbt
tasks, commands, or settings. This kind of plugin may provide these settings
automatically or make them available for the user to explicitly integrate.
To make an auto plugin, create a project and configure sbtPlugin to true.
sbtPlugin := true
Then, write the plugin code and publish your project to a repository. The plugin
can be used as described in the previous section.
First, in an appropriate namespace, define your auto plugin object by extending
sbt.AutoPlugin.
projectSettings and buildSettings With auto plugins, all provided set-
tings (e.g. assemblySettings) are provided by the plugin directly via the
projectSettings method. Here’s an example plugin that adds a command
named hello to sbt projects:
package sbthello
import sbt._
import Keys._
package sbtless
import sbt._
import Keys._
object SbtLessPlugin extends AutoPlugin {
override def requires = SbtJsTaskPlugin
override lazy val projectSettings = ...
}
The requires method returns a value of type Plugins, which is a DSL for
constructing the dependency list. The requires method typically contains one
of the following values:
Root plugins and triggered plugins Some plugins should always be explic-
itly enabled on projects. We call these root plugins, i.e. plugins that are “root”
nodes in the plugin dependency graph. An auto plugin is a root plugin by
default.
Auto plugins also provide a way for plugins to automatically attach themselves
to projects if their dependencies are met. We call these triggered plugins, and
they are created by overriding the trigger method.
For example, we might want to create a triggered plugin that can append
commands automatically to the build. To do this, set the requires method
to return empty (this is the default), and override the trigger method with
allRequirements.
package sbthello
import sbt._
import Keys._
The build user still needs to include this plugin in project/plugins.sbt, but
it is no longer needed to be included in build.sbt. This becomes more in-
teresting when you do specify a plugin with requirements. Let’s modify the
SbtLessPlugin so that it depends on another plugin:
package sbtless
import sbt._
import Keys._
object SbtLessPlugin extends AutoPlugin {
override def trigger = allRequirements
override def requires = SbtJsTaskPlugin
override lazy val projectSettings = ...
}
As it turns out, the PlayScala plugin (in case you didn’t know, the Play framework
is an sbt plugin) lists SbtJsTaskPlugin as one of its required plugins. So, if we
define a build.sbt with:
package sbthello
import sbt._
import Keys._
override lazy val buildSettings = Seq(
greeting := "Hi!",
commands += helloCommand)
lazy val helloCommand =
Command.command("hello") { (state: State) =>
println(greeting.value)
state
}
}
sbtPlugin := true
name := "sbt-obfuscate"
organization := "org.example"
ObfuscatePlugin.scala:
package sbtobfuscate
import sbt._
import autoImport._
override def requires = sbt.plugins.JvmPlugin
object Obfuscate {
def apply(sources: Seq[File]): Seq[File] = sources
}
Usage example A build definition that uses the plugin might look like the following.
obfuscate.sbt:
This plugin will be available for every sbt project for the current user.
In addition:
3. sbt will rebuild the plugin and use it for the project. Additionally, the plu-
gin will be available in other projects on the machine without recompiling
again. This approach skips the overhead of publishLocal and cleaning
the plugins directory of the project using the plugin.
$ sbt
> reload plugins
[info] Set current project to default (in build file:/Users/sbt/demo2/project/)
>
Then, we can add dependencies like usual and save them to project/plugins.sbt.
It is useful, but not required, to run update to verify that the dependencies are
correct.
> set libraryDependencies += "org.clapper" %% "grizzled-scala" % "1.0.4"
...
> update
...
> session save
...
1d) Project dependency This variant shows how to use sbt’s external
project support to declare a source dependency on a plugin. This means that
the plugin will be built from source and used on the classpath.
Edit project/plugins.sbt
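A sketch of such a declaration, assuming a plugin hosted in a git repository
(the project name and URI here are illustrative):

```scala
// project/plugins.sbt: depend on the plugin's sources via an external project reference
lazy val root = (project in file(".")).dependsOn(assemblyPlugin)

// sbt clones the repository and builds the plugin from source
lazy val assemblyPlugin = uri("git://github.com/sbt/sbt-assembly")
```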
One caveat to using this method is that the local sbt will try to run the remote
plugin’s build. It is quite possible that the plugin’s own build uses a different
sbt version, as many plugins cross-publish for several sbt versions. As such, it
is recommended to stick with binary artifacts when possible.
In a build.sbt file:
import grizzled.sys._
import OperatingSystem._
libraryDependencies ++=
if(os == Windows)
Seq("org.example" % "windows-only" % "1.0")
else
Seq.empty
Best Practices
If you’re a plugin writer, please consult the Plugins Best Practices page; it
contains a set of guidelines to help you ensure that your plugin is consistent and
plays well with other plugins.
This page is intended primarily for sbt plugin authors. This page assumes you’ve
read using plugins and Plugins.
A plugin developer should strive for consistency and ease of use. Specifically:
• Plugins should play well with other plugins. Avoiding namespace clashes
(in both sbt and Scala) is paramount.
• Plugins should follow consistent conventions. The experiences of an sbt
user should be consistent, no matter what plugins are pulled in.
Make sure people can find your plugin. Here are some of the recommended
steps:
Don’t use default package
Users who have their build files in some package will not be able to use your
plugin if it's defined in the default (no-name) package.
Your plugin should fit in naturally with the rest of the sbt ecosystem. The first
thing you can do is to avoid defining commands, and use settings and tasks
and task-scoping instead (see below for more on task-scoping). Most of the
interesting things in sbt like compile, test and publish are provided using
tasks. Tasks can take advantage of duplication reduction and parallel execution
by the task engine. With features like ScopeFilter, many of the features that
previously required commands are now possible using tasks.
Settings can be composed from other settings and tasks. Tasks can be composed
from other tasks and input tasks. Commands, on the other hand, cannot be
composed from any of the above. In general, use the minimal thing that you
need. One legitimate use of commands is for a plugin to access the build
definition itself, not the code. sbt-inspectr was implemented using a command
before it became inspect tree.
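As a sketch of the preferred style, here is a small piece of build logic expressed
as a task rather than a command (the plugin and key names are hypothetical):

```scala
import sbt._
import Keys._

object CountPlugin extends AutoPlugin {
  object autoImport {
    // a task composes with other tasks and runs under the task engine;
    // a command could not be reused this way
    val countSources = taskKey[Int]("Number of Scala source files in Compile")
  }
  import autoImport._
  override def projectSettings = Seq(
    countSources := (sources in Compile).value.size
  )
}
```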
Use sbt.AutoPlugin
sbt has a number of predefined keys. Where possible, reuse them in your plugin.
For instance, don’t define:
Sometimes, you need a new key, because there is no existing sbt key. In this
case, use a plugin-specific prefix.
package sbtobfuscate
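The rest of this listing can be sketched from the usage shown below; the key
names obfuscate and obfuscateStylesheet are inferred from the surrounding
prose, not authoritative:

```scala
// continues the sbtobfuscate package declared above
object ObfuscatePlugin extends sbt.AutoPlugin {
  object autoImport {
    // every key name carries the plugin-specific prefix "obfuscate"
    lazy val obfuscate = taskKey[Seq[File]]("obfuscate the source")
    lazy val obfuscateStylesheet = settingKey[File]("obfuscate stylesheet")
  }
}
```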
In this approach, every lazy val starts with obfuscate. A user of the plugin
would refer to the settings like this:
obfuscateStylesheet := file("something.txt")
Configuration advice
Only if your plugin introduces a new set of source code or its own library
dependencies do you want your own configuration.
package sbtwhatever

object WhateverPlugin extends sbt.AutoPlugin {
  object autoImport {
    // BAD sample
    lazy val Whatever = config("whatever") extend(Compile)
    lazy val dude = settingKey[String]("A plugin specific key")
  }
  import autoImport._
  override lazy val projectSettings = Seq(
    dude in Whatever := "your opinion man" // DON'T DO THIS
  )
}
package sbtfuzz

object FuzzPlugin extends sbt.AutoPlugin {
  object autoImport {
    lazy val Fuzz = config("fuzz") extend(Compile)
  }
  import autoImport._
  lazy val baseFuzzSettings: Seq[Def.Setting[_]] = Seq(
    test := { println("fuzz test") }
  )
  override lazy val projectSettings = inConfig(Fuzz)(baseFuzzSettings)
}
Provide raw settings and configured settings Split your settings by the
configuration axis like so:
package sbtobfuscate

object ObfuscatePlugin extends sbt.AutoPlugin {
  object autoImport {
    lazy val obfuscate = taskKey[Seq[File]]("obfuscate the source")
    lazy val obfuscateStylesheet = settingKey[File]("obfuscate stylesheet")
  }
  import autoImport._
  lazy val baseObfuscateSettings: Seq[Def.Setting[_]] = Seq(
    obfuscate := Obfuscate((sources in obfuscate).value),
    sources in obfuscate := sources.value
  )
  override lazy val projectSettings = inConfig(Compile)(baseObfuscateSettings)
}
The baseObfuscateSettings value provides base configuration for the plu-
gin’s tasks. This can be re-used in other configurations if projects require it.
The obfuscateSettings value provides the default Compile scoped settings
for projects to use directly. This gives the greatest flexibility in using features
provided by a plugin. Here’s how the raw settings may be reused:
import sbtobfuscate.ObfuscatePlugin
Using a “main” task scope for settings Sometimes you want to define
some settings for a particular “main” task in your plugin. In this instance, you
can scope your settings using the task itself. See the baseObfuscateSettings:
In the above example, sources in obfuscate is scoped under the main task,
obfuscate.
There may be times when you need to muck with globalSettings. The general
rule is be careful what you touch.
When overriding global settings, care should be taken to ensure previous settings
from other plugins are not ignored. e.g. when creating a new onLoad handler,
ensure that the previous onLoad handler is not removed.
package sbtsomething

import sbt._
import Keys._

object MyPlugin extends AutoPlugin {
  override val globalSettings: Seq[Setting[_]] = Seq(
    // compose with any previously registered onLoad handler rather than replacing it
    onLoad in Global := (onLoad in Global).value andThen { state =>
      // ... this plugin's own load-time work ...
      state
    }
  )
}
Sbt Launcher
The sbt launcher provides a generic container that can load and run programs
resolved using the Ivy dependency manager. Sbt uses this as its own deployment
mechanism.
Overview
A user downloads the launcher jar and creates a script to run it. In this docu-
mentation, the script will be assumed to be called launch. For unix, the script
would look like: java -jar sbt-launcher.jar "$@"
The user can now launch servers and applications which provide sbt launcher
configuration.
Servers The sbt launcher can be used to launch and discover running servers
on the system. The launcher can be used to launch servers similarly to applica-
tions. However, if desired, the launcher can also be used to ensure that only one
instance of a server is running at a time. This is done by having clients always
use the launcher as a service locator.
To discover where a server is running (or launch it if it is not run-
ning), the user downloads the configuration file for the server (call it
my.server.properties) and creates a script to discover the server (call it
find-myserver): launch --locate @my.server.properties.
This command will print out one string, the URI at which to reach the server,
e.g. sbt://127.0.0.1:65501. Clients should use the IP/port to connect to
the server and initiate their connection.
When using the locate feature, the sbt launcher places the following restric-
tions on servers:
• The Server must have a starting class that extends the xsbti.ServerMain
class
• The Server must have an entry point (URI) that clients can use to detect
the server
• The server must have defined a lock file which the launcher can use to
ensure that only one instance is running at a time
• The filesystem on which the lock file resides must support locking.
• The server must allow the launcher to open a socket against the port
without sending any data. This is used to check if a previous server is still
alive.
Make the entry point of your application implement xsbti.AppMain. An example
that uses some of the information:
package xsbt.test
class Main extends xsbti.AppMain
{
def run(configuration: xsbti.AppConfiguration) =
{
// get the version of Scala used to launch the application
val scalaVersion = configuration.provider.scalaProvider.version
// demonstrate the ability to reboot the application into different versions of Scala
// and how to return the code to exit with
scalaVersion match
{
case "2.9.3" =>
new xsbti.Reboot {
def arguments = configuration.arguments
def baseDirectory = configuration.baseDirectory
def scalaVersion = "2.10.2"
def app = configuration.provider.id
}
case "2.10.2" => new Exit(1)
case _ => new Exit(0)
}
}
class Exit(val code: Int) extends xsbti.Exit
}
Next, define a configuration file for the launcher. For the above class, it might
look like:
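A sketch of such a configuration, assuming the xsbt.test.Main class above; the
module coordinates and boot directory are illustrative:

```
[scala]
  version: 2.10.2

[app]
  org: org.example
  name: demo
  version: 0.1
  class: xsbt.test.Main
  cross-versioned: binary

[repositories]
  local
  maven-central

[boot]
  directory: ${user.home}/.myapp/boot
```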
Then, publishLocal or +publishLocal the application to make it available.
For more information, see Launcher Configuration.
• The user downloads the launcher jar and you provide the configuration
file.
– The user needs to run java -Dsbt.boot.properties=your.boot.properties
-jar launcher.jar.
– The user already has a script to run the launcher (call it ‘launch’).
The user needs to run launch @your.boot.properties your-arg-1
your-arg-2
Execution Let’s review what’s happening when the launcher starts your ap-
plication.
On startup, the launcher searches for its configuration and then parses it. Once
the final configuration is resolved, the launcher proceeds to obtain the necessary
jars to launch the application. The boot.directory property is used as a base
directory to retrieve jars to. Locking is done on the directory, so it can be shared
system-wide. The launcher retrieves the requested version of Scala to
${boot.directory}/${scala.version}/lib/
If this directory already exists, the launcher takes a shortcut for startup perfor-
mance and assumes that the jars have already been downloaded. If the directory
does not exist, the launcher uses Apache Ivy to resolve and retrieve the jars. A
similar process occurs for the application itself. It and its dependencies are
retrieved to
${boot.directory}/${scala.version}/${app.org}/${app.name}/.
Once all required code is downloaded, the class loaders are set up. The launcher
creates a class loader for the requested version of Scala. It then creates a child
class loader containing the jars for the requested ‘app.components’ and with the
paths specified in app.resources. An application that does not use components
will have all of its jars in this class loader.
The main class for the application is then instantiated. It must be a public class
with a public no-argument constructor and must conform to xsbti.AppMain.
The run method is invoked and execution passes to the application. The argu-
ment to the ‘run’ method provides configuration information and a callback to
obtain a class loader for any version of Scala that can be obtained from a repos-
itory in [repositories]. The return value of the run method determines what
is done after the application executes. It can specify that the launcher should
restart the application or that it should exit with the provided exit code.
Sbt Launcher Architecture
The sbt launcher is a mechanism whereby modules can be loaded from ivy and
executed within a jvm. It abstracts the mechanism of grabbing and caching jars,
allowing users to focus on what application they want and control its versions.
The launcher’s primary goal is to take configuration for applications, mostly
just ivy coordinates and a main class, and start the application. The launcher
resolves the ivy module, caches the required runtime jars and starts the appli-
cation.
The sbt launcher provides the application with the means to load a different
application when it completes, exit normally, or load additional applications
from inside another.
The sbt launcher provides these core functions:
• Module Resolution
• Classloader Caching and Isolation
• File Locking
• Service Discovery and Isolation
Module Resolution
The primary purpose of the sbt launcher is to resolve applications and run them.
This is done through the [app] configuration section. See launcher configuration
for more information on how to configure module resolution.
Module resolution is performed using the Ivy dependency management library.
This library supports loading artifacts from Maven repositories as well.
The sbt launcher’s classloading structure is different from just starting an
application in the standard java mechanism. Every application loaded by the
launcher is given its own classloader. This classloader is a child of the Scala
classloader used by the application. The Scala classloader can see all of the
xsbti.* classes from the launcher itself.
Here’s an example classloader layout from an sbt launched application.
In this diagram, three different applications were loaded. Two of these use the
same version of Scala (2.9.2). In this case, sbt can share the same classloader for
these applications. This has the benefit that any JIT optimisations performed
on scala classes can be re-used between applications thanks to the shared class-
loader.
Figure 3: image
Caching
The sbt launcher creates a secondary cache on top of Ivy’s own cache. This helps
isolate applications from errors resulting from unstable revisions, like -SNAPSHOT.
For any launched application, the launcher creates a directory to store all its
jars. Here’s an example layout.
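A sketch of such a layout, following the
${boot.directory}/${scala.version}/${app.org}/${app.name} pattern described
earlier (the coordinates are illustrative):

```
${boot.directory}/
  2.10.2/
    lib/                # the Scala jars for this version
    org.example/
      my-app/           # the application jars and its dependencies
```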
Locking
This feature requires a filesystem which supports locking. It is exposed via the
xsbti.GlobalLock interface.
Note: This is both a thread and file lock. Not only are we limiting access to a
single process, but also a single thread within that process.
The launcher also provides a mechanism to ensure that only one instance of a
server is running, while dynamically starting it when a client requests it. This is
done through the --locate flag on the launcher. When the launcher is started
with the --locate flag it will do the following:
Sbt Launcher Configuration
The launcher may be configured in one of the following ways in increasing order
of precedence:
Example
• org - The organization associated with the Ivy module. (groupId in maven
vernacular)
• name - The name of the Ivy module. (artifactId in maven vernacular)
• version - The revision of the Ivy module.
• class - The name of the “entry point” into the application. An entry
point must be a class which meets one of the following criteria
– Extends the xsbti.AppMain interface.
– Extends the xsbti.ServerMain interface.
– Contains a method with the signature static void main(String[])
– Contains a method with the signature static int main(String[])
– Contains a method with the signature static xsbti.Exit main(String[])
• components - An optional list of additional components that Ivy should
resolve.
• cross-versioned - An optional string denoting how this application is
published. If app.cross-versioned is binary, the resolved module ID is
{app.name+'_'+CrossVersion.binaryScalaVersion(scala.version)}.
If app.cross-versioned is true or full, the resolved module ID is
{app.name+'_'+scala.version}. The scala.version property must be
specified and cannot be auto when cross-versioned.
• resources - An optional list of jar files that should be added to the
application’s classpath.
• classifiers - An optional list of additional classifiers that should be
resolved with this application, e.g. sources.
Besides built in repositories, other repositories can be configured using the fol-
lowing syntax:
The name property is an identifier which Ivy uses to cache modules resolved
from this location. The name should be unique across all repositories.
The url property is the base url where Ivy should look for modules.
The pattern property is an optional specification of how Ivy should look for
modules. By default, the launcher assumes repositories are in the maven style
format.
The skipConsistencyCheck string is used to tell ivy not to validate checksums
and signatures of files it resolves.
4. The Boot section The [boot] section is used to configure where the sbt
launcher will store its cache and configuration information. It consists of the
following properties:
• directory - The directory defined here is used to store all cached JARs
resolved by the launcher.
• properties - (optional) A properties file to use for any read variables.
5. The Ivy section The [ivy] section is used to configure the Ivy depen-
dency manager for resolving applications. It consists of the following properties:
• ivy-home - The home directory for Ivy. This determines where the ivy-
local repository is located, and also where the ivy cache is stored. Defaults
to ~/.ivy2
• ivy.cache-directory - provides an alternative location for the Ivy cache
used by the launcher. This does not automatically set the Ivy cache for
the application, but the application is provided this location through the
AppConfiguration instance.
• checksums - The comma-separated list of checksums that Ivy should use
to verify artifacts have correctly resolved, e.g. md5 or sha1.
• override-build-repos - If this is set, then the isOverrideRepositories
method on xsbti.Launcher interface will return its value. The use of this
method is application specific, but in the case of sbt denotes that the
configuration of repositories in the launcher should override those used by
any build. Applications should respect this convention if they can.
• repository-config - This specifies a configuration location where ivy
repositories can also be configured. If this file exists, then its contents
override the [repositories] section.
6. The Server Section When using the --locate feature of the launcher,
this section configures how a server is started. It consists of the following prop-
erties:
• lock - The file that controls access to the running server. This file will
contain the active port used by a server and must be located on a
filesystem that supports locking.
• jvmargs - A file that contains line-separated JVM arguments to use
when starting the server.
• jvmprops - The location of a properties file that will define override prop-
erties in the server. All properties defined in this file will be set as -D java
properties.
Variable Substitution
• ${variable.name}
• ${variable.name-default}
• read(property.name)[default]
This will look in the file configured by boot.properties for a value. If there is
no boot.properties file configured, or the property does not exist, then the
default value is chosen.
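For instance, a hypothetical application section could default its version this
way — read(my.app.version)[0.1] looks up my.app.version in the configured
properties file and falls back to 0.1 if it is not defined:

```
[app]
  org: org.example
  name: demo
  version: read(my.app.version)[0.1]
```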
Syntax
The configuration file is line-based, read as UTF-8 encoded, and defined by the
following grammar. 'nl' is a newline or end of file and 'text' is plain text
without newlines or the surrounding delimiters (such as parentheses or square
brackets):
version: "version" ":" versionSpecification
versionSpecification: readProperty | fixedVersion
readProperty: "read" "(" propertyName ")" "[" default "]"
fixedVersion: text
classifiers: "classifiers" ":" text ("," text)*
homeDirectory: "ivy-home" ":" path
checksums: "checksums" ":" checksum ("," checksum)*
overrideRepos: "override-build-repos" ":" boolean
repoConfig: "repository-config" ":" path
org: "org" ":" text
name: "name" ":" text
class: "class" ":" text
components: "components" ":" component ("," component)*
crossVersioned: "cross-versioned" ":" ("true" | "false" | "none" | "binary" | "full")
resources: "resources" ":" path ("," path)*
repository: ( predefinedRepository | customRepository ) nl
predefinedRepository: "local" | "maven-local" | "maven-central"
customRepository: label ":" url [ ["," ivyPattern] ["," artifactPattern] [", mavenCompatible"] ]
property: label ":" propertyDefinition ("," propertyDefinition)*
propertyDefinition: mode "=" (set | prompt)
mode: "quick" | "new" | "fill"
set: "set" "(" value ")"
prompt: "prompt" "(" label ")" ("[" default "]")?
boolean: "true" | "false"
nl: "\r\n" | "\n" | "\r"
path: text
propertyName: text
label: text
default: text
checksum: text
ivyPattern: text
artifactPattern: text
url: text
component: text
Developer’s Guide
This is the set of documentation about the Architecture of sbt. This covers
all the core components of sbt as well as the general notion of how they all
work together. This documentation is suitable for those who wish to have a
deeper understanding of sbt’s core, but already understand the fundamentals of
Setting[_], Task[_] and constructing builds.
Core Principles
This document details the core principles overarching sbt’s design and code style.
Sbt’s core principles can be stated quite simply:
With these principles in mind, let’s walk through the core design of sbt.
This is the first piece you hit when starting sbt. Sbt’s command engine is the
means by which it processes user requests using the build state. The command
engine is essentially a means of applying state transformations on the build
state, to execute user requests.
In sbt, commands are functions that take the current build state (sbt.State)
and produce the next state. In other words, they are essentially functions of
sbt.State => sbt.State. However, in reality, Commands are actually string
processors which take some string input and act on it, returning the next build
state.
So, the entirety of sbt is driven off the sbt.State class. Since this class needs
to be resilient in the face of custom code and plugins, it needs a mechanism
to store the state from any potential client. In dynamic languages, this can be
done directly on objects.
A naive approach in Scala is to use a Map[String, Any]. However, this violates
tenet #1: everything should have a type. So, sbt defines a new kind of map
called an AttributeMap. An AttributeMap is a key-value storage mechanism
where keys carry both a string label and the expected type of their value.
Here is what the typesafe AttributeKey looks like:
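Its shape can be sketched as follows — a simplification of the real
sbt.AttributeKey, not its full definition:

```scala
// simplified sketch of sbt's typesafe key
trait AttributeKey[T] {
  def label: String         // the human-readable name of the key
  def manifest: Manifest[T] // runtime type information for the value type T
}
```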
These keys store both a label (string) and some runtime type information
(manifest). To put or get something on the AttributeMap, we first need to con-
struct one of these keys. Let’s look at the basic definition of the AttributeMap
:
trait AttributeMap {
  /** Gets the value of type ``T`` associated with the key ``k``, or ``None`` if no value is associated.
   * If a key with the same label but a different type is defined, this method will return ``None``. */
  def get[T](k: AttributeKey[T]): Option[T]

  /** Adds the mapping ``k -> value`` to this map, replacing any existing mapping for ``k``.
   * Any mappings for keys with the same label but different types are unaffected. */
  def put[T](k: AttributeKey[T], value: T): AttributeMap
}
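A usage sketch, assuming sbt's AttributeMap.empty constructor:

```scala
import sbt._

val count = AttributeKey[Int]("count")

val m = AttributeMap.empty.put(count, 3)
m.get(count)                         // Some(3)
// a key with the same label but a different type does not match:
m.get(AttributeKey[String]("count")) // None
```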
Now that there’s a definition of what build state is, there needs to be a way to
dynamically construct it. In sbt, this is done through the Setting[_] sequence.
Settings Architecture
normalizedName := normalize(name.value)
Figure 4: image
Here, a Setting[_] is constructed that understands it depends on the value in
the name AttributeKey. Its initialize block first grabs the value of the name key,
then runs the function normalize on it to compute its value.
This represents the core mechanism of how to construct sbt’s build state. Con-
ceptually, at some point we have a graph of dependencies and initialization
functions which we can use to construct the first build state. Once this is com-
pleted, we can then start to process user requests.
Task Architecture
The next layer in sbt is built around these user requests, or tasks. When a user
configures a build, they are defining a set of repeatable tasks that they can
run on their project. Things like compile or test. These tasks also have a
dependency graph, where e.g. the test task requires that compile has run
before it can successfully execute.
sbt defines a class Task[T]. The T type parameter represents the type of data
returned by a task. Remember the tenets of sbt? “All things have types” and
“Dependencies are explicit” both hold true for tasks. Sbt promotes a style of
task dependencies that is closer to functional programming: Return data for
your users rather than using shared mutable state.
Most build tools communicate over the filesystem, and indeed sbt, by necessity,
does some of this. However, for stable parallelization it is far better to keep
tasks isolated on the filesystem and communicate directly through types.
Similarly to how a Setting[_] stores both dependencies and an initialization
function, a Task[_] stores both its Task[_] dependencies and its behavior (a
function).
TODO - More on Task[_]
TODO - Transition into InputTask[_], rehash Command
TODO - Transition into Scope.
Settings Core
This page describes the core settings engine a bit. This may be useful for using it
outside of sbt. It may also be useful for understanding how sbt works internally.
The documentation is comprised of two parts. The first part shows an example
settings system built on top of the settings engine. The second part comments
on how sbt’s settings system is built on top of the settings engine. This may help
illuminate what exactly the core settings engine provides and what is needed to
build something like the sbt settings system.
Example
Setting up To run this example, first create a new project with the following
build.sbt file:
resolvers += sbtResolver.value
Example Settings System The first part of the example defines the custom
settings system. There are three main parts:
There is also a fourth, but its usage is likely to be specific to sbt at this time.
The example uses a trivial implementation for this part.
SettingsExample.scala
import sbt._
}
// These three functions + a scope (here, Scope) are sufficient for defining our settings system.
}
Example Usage This part shows how to use the system we just defined. The
end result is a Settings[Scope] value. This type is basically a mapping Scope
-> AttributeKey[T] -> Option[T]. See the Settings API documentation for
details. SettingsUsage.scala:
import sbt._
import SettingsExample._
import Types._
object SettingsUsage {
val b4 = ScopedKey(Scope(4), b)
// This can be split into multiple steps to access intermediate results if desired.
// The 'inspect' command operates on the output of 'compile', for example.
val applied: Settings[Scope] = make(mySettings)(delegates, scopeLocal, showFullKey)
// Show results.
for(i <- 0 to 5; k <- Seq(a, b)) {
println( k.label + i + " = " + applied.get( Scope(i), k) )
}
}
a0 = None
b0 = None
a1 = None
b1 = None
a2 = None
b2 = None
a3 = Some(3)
b3 = None
a4 = Some(3)
b4 = Some(9)
a5 = Some(4)
b5 = Some(9)
• For the None results, we never defined the value and there was no value
to delegate to.
• For a3, we explicitly defined it to be 3.
• a4 wasn’t defined, so it delegates to a3 according to our delegates function.
• b4 gets the value for a4 (which delegates to a3, so it is 3) and multiplies
by 3
• a5 is defined as the previous value of a5 + 1 and since no previous value
of a5 was defined, it delegates to a4, resulting in 3+1=4.
• b5 isn’t defined explicitly, so it delegates to b4 and is therefore equal to 9
as well
Scopes sbt defines a more complicated scope than the one shown here for the
standard usage of settings in a build. This scope has four components: the
project axis, the configuration axis, the task axis, and the extra axis. Each
component may be Global (no specific value), This (current context), or Select
(containing a specific value). sbt resolves This to either Global or Select
depending on the context.
For example, in a project, a This project axis becomes a Select referring to the
defining project. All other axes that are This are translated to Global. Functions
like inConfig and inTask transform This into a Select for a specific value. For
example, inConfig(Compile)(someSettings) translates the configuration axis
for all settings in someSettings to be Select(Compile) if the axis value is This.
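The translation performed by inConfig can be illustrated with a small sketch;
the setting chosen here is arbitrary:

```scala
import sbt._
import Keys._

// Each setting whose configuration axis is This gets Select(Compile) instead:
val compileScoped: Seq[Def.Setting[_]] = inConfig(Compile)(Seq(
  scalacOptions += "-deprecation" // becomes scalacOptions in Compile
))
```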
So, from the example and from sbt’s scopes, you can see that the core settings
engine does not impose much on the structure of a scope. All it requires is a
delegates function Scope => Seq[Scope] and a display function. You can
choose a scope type that makes sense for your situation.
Constructing settings The app, value, update, and related methods are the
core methods for constructing settings. This example obviously looks rather dif-
ferent from sbt’s interface because these methods are not typically used directly,
but are wrapped in a higher-level abstraction.
With the core settings engine, you work with HLists to access other settings.
In sbt’s higher-level system, there are wrappers around HList for TupleN and
FunctionN for N = 1-9 (except Tuple1 isn’t actually used). When working with
arbitrary arity, it is useful to make these wrappers at the highest level possible.
This is because once wrappers are defined, code must be duplicated for every
N. By making the wrappers at the top-level, this requires only one level of
duplication.
Additionally, sbt uniformly integrates its task engine into the settings system.
The underlying settings engine has no notion of tasks. This is why sbt uses a
SettingKey type and a TaskKey type. Methods on an underlying TaskKey[T]
are basically translated to operating on an underlying SettingKey[Task[T]]
(and they both wrap an underlying AttributeKey).
For example, a := 3 for a SettingKey a will very roughly translate to setting(a,
value(3)). For a TaskKey a, it will roughly translate to setting(a, value(
task { 3 } ) ). See main/Structure.scala for details.
Setting Initialization
This page outlines the mechanisms by which sbt loads settings for a particular
build, including the hooks where users can control the ordering of everything.
As stated elsewhere, sbt constructs its initialization graph and task graph via
Setting[_] objects. A setting is something which can take the values stored
at other Keys in the build state, and generates a new value for a particular
build key. Sbt converts all registered Setting[_] objects into a giant linear
sequence and compiles them into a task graph. This task graph is then used
to execute your build.
All of sbt’s loading semantics are contained within the Load.scala file. It is
approximately the following:
Figure 5: image
The blue circles represent actions happening when sbt loads a project. We can
see that sbt performs the following actions in load:
b. Load/Compile the project (project/*.scala)
Each of these loads defines several sequences of settings. The diagram shows
the two most important:
or in a build.sbt file :
Controlling Initialization
The order which sbt uses to load settings is configurable at a project level.
This means that we can’t control the order of settings added to Build/Global
namespace, but we can control how each project loads, e.g. plugins and .sbt
files. To do so, use the AddSettings class:
import sbt._
import Keys._
import AddSettings._
The AddSettings object provides the following “groups” of settings you can use
for ordering:
• autoPlugins All the ordered settings of plugins after they’ve gone through
dependency resolution
• buildScalaFiles The full sequence of settings defined directly in
project/*.scala builds.
• sbtFiles(*) Specifies the exact setting DSL files to include (files must
use the .sbt file format)
• userSettings All the settings defined in the user directory ~/.sbt/<version>/.
• defaultSbtFiles Include all local *.sbt file settings.
For example, let’s see what happens if we move the build.sbt files before the
buildScalaFile.
Let’s create an example project with the following definition. project/build.scala:
object MyTestBuild extends Build {
  val testProject = project.in(file("."))
    .settingSets(autoPlugins, defaultSbtFiles, buildScalaFiles)
    .settings(
      version := scalaBinaryVersion.value match {
        case "2.10" => "1.0-SNAPSHOT"
        case v => s"1.0-for-${v}-SNAPSHOT"
      }
    )
}
This build defines a version string which appends the Scala version if the current
Scala version is not in the 2.10.x series. Now, when issuing a release we
want to lock down the version. Most tools assume this can happen by writing
a version.sbt file. version.sbt:
version := "1.0.0"
However, when we load this new build, we find that the version in version.sbt
has been overridden by the one defined in project/Build.scala because of the
order we defined for settings, so the new version.sbt file has no effect.
Build Loaders
Build loaders are the means by which sbt resolves, builds, and transforms build
definitions. Each aspect of loading may be customized for special applications.
Customizations are specified by overriding the buildLoaders methods of your
build definition’s Build object. These customizations apply to external projects
loaded by the build, but not the (already loaded) Build in which they are defined.
Also documented on this page is how to manipulate inter-project dependencies
from a setting.
Custom Resolver
The resolver should return None if it cannot handle the URI or Some containing
a function that will retrieve the build. The ResolveInfo provides a staging
directory that can be used or the resolver can determine its own target directory.
Whichever is used, it should be returned by the loading function. A resolver is
registered by passing it to BuildLoader.resolve and overriding Build.buildLoaders
with the result:
...
object Demo extends Build {
...
override def buildLoaders =
BuildLoader.resolve(demoResolver) ::
Nil
• ResolveInfo
• BuildLoader
Full Example
import sbt._
import Keys._

object Demo extends Build {
  // Register the custom resolver.
  override def buildLoaders =
    BuildLoader.resolve(demoResolver) ::
    Nil

  // Handle URIs with the 'demo' scheme; return None for anything else.
  def demoResolver(info: BuildLoader.ResolveInfo): Option[() => File] =
    if(info.uri.getScheme != "demo")
      None
    else
    {
      // Use a subdirectory of the staging directory for the new local build.
      // The subdirectory name is derived from a hash of the URI,
      // and so identical URIs will resolve to the same directory (as desired).
      val base = RetrieveUnit.temporary(info.staging, info.uri)
      Some(() => resolveDemo(base, info.uri.getSchemeSpecificPart))
    }

  // Construct a sample project on the fly with the name specified in the URI.
  def resolveDemo(base: File, ssp: String): File =
  {
    // Only create the project if it hasn't already been created.
    if(!base.exists)
      IO.write(base / "build.sbt", template.format(ssp))
    base
  }

  def template = """
name := "%s"

version := "1.0"
"""
}
Custom Builder
A builder returns None if it does not want to handle the build identified by
the BuildInfo. Otherwise, it provides a function that will load the build when
evaluated. Register a builder by passing it to BuildLoader.build and overriding
Build.buildLoaders with the result:
...
object Demo extends Build {
...
override def buildLoaders =
BuildLoader.build(demoBuilder) ::
Nil
• BuildInfo
• BuildLoader
• BuildUnit
val n = Project.normalizeProjectID(model.getName)
val base = Option(model.getProjectDirectory) getOrElse info.base
val root = Project(n, base) settings( pomSettings(model) : _*)
val build = new Build { override def projects = Seq(root) }
val loader = this.getClass.getClassLoader
val definitions = new LoadedDefinitions(info.base, Nil, loader, build :: Nil, Nil)
val plugins = new LoadedPlugins(info.base / "project", Nil, loader, Nil, Nil)
new BuildUnit(info.uri, info.base, definitions, plugins)
}
Custom Transformer
...
object Demo extends Build {
...
override def buildLoaders =
BuildLoader.transform(demoTransformer) ::
Nil
• TransformInfo
• BuildLoader
• BuildUnit
The BuildDependencies type
buildDependencies in Global := {
val deps = (buildDependencies in Global).value
val oldURI = uri("...") // the URI to replace
val newURI = uri("...") // the URI replacing oldURI
def substitute(dep: ClasspathDep[ProjectRef]): ClasspathDep[ProjectRef] =
if(dep.project.build == oldURI)
ResolvedClasspathDependency(ProjectRef(newURI, dep.project.project), dep.configuration
else
dep
val newcp =
for( (proj, deps) <- deps.cp) yield
(proj, deps map substitute)
new BuildDependencies(newcp, deps.aggregate)
}
There are several components of sbt that may be used to create a command
line application. The launcher and the command system are the two main ones
illustrated here.
As described on the launcher page, a launched application implements the
xsbti.AppMain interface and defines a brief configuration file that users pass to
the launcher to run the application. To use the command system, an application
sets up a State instance that provides command implementations and the
initial commands to run. A minimal hello world example is given below.
1. build.sbt
2. Main.scala
3. hello.build.properties
Like sbt itself, you can specify commands from the command line (batch
mode) or run them at a prompt (interactive mode).
Build Definition: build.sbt The build.sbt file should define the standard
settings: name, version, and organization. To use the sbt command system, a
dependency on the command module is needed. To use the task system, add a
dependency on the task-system module as well.
organization := "org.example"
name := "hello"
version := "0.1-SNAPSHOT"
1. Provide command definitions. These are the commands that are available
for users to run.
2. Define initial commands. These are the commands that are initially scheduled
to run. For example, an application will typically add anything specified
on the command line (what sbt calls batch mode) and, if no commands
are defined, enter interactive mode by running the ‘shell’ command.
3. Set up logging. The default setup in the example rotates the log file after
each user interaction and sends brief logging to the console and verbose
logging to the log file.
package org.example
import sbt._
import java.io.{File, PrintWriter}
/** Sets up the application by constructing an initial State instance with the supported commands
* and initial commands to run. See the State API documentation for details. */
def initialState(configuration: xsbti.AppConfiguration): State =
{
val commandDefinitions = hello +: BasicCommands.allBasicCommands
val commandsToRun = Hello +: "iflast shell" +: configuration.arguments.map(_.trim)
State( configuration, commandDefinitions, Set.empty, None, commandsToRun, State.newHistory,
AttributeMap.empty, initialGlobalLogging, State.Continue )
}
/** Configures logging to log to a temporary backing file as well as to the console.
* An application would need to do more here to customize the logging level and
* provide access to the backing file (like sbt's last command and logLevel setting).*/
def initialGlobalLogging: GlobalLogging =
GlobalLogging.initial(MainLogging.globalDefault _, File.createTempFile("hello", "log"))
}
Launcher configuration file: hello.build.properties The launcher
needs a configuration file in order to retrieve and run an application.
hello.build.properties:
[scala]
version: 2.9.1
[app]
org: org.example
name: hello
version: 0.1-SNAPSHOT
class: org.example.Main
components: xsbti
cross-versioned: true
[repositories]
local
maven-central
typesafe-ivy-releases: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
Nightly Builds
How to…
See Detailed Table of Contents for the list of all the how-tos.
Classpaths
The classpathTypes setting controls the types of managed artifacts that are
included on the classpath by default. To add a new type, such as mar,
classpathTypes += "mar"
See the default types included by running show classpathTypes at the sbt
prompt.
The dependencyClasspath task scoped to Compile provides the classpath to use
for compilation. Its type is Seq[Attributed[File]], which means that each
entry carries additional metadata. The files method provides just the raw
Seq[File] for the classpath. For example, to use the files for the compilation
classpath in another task:
example := {
val cp: Seq[File] = (dependencyClasspath in Compile).value.files
...
}
Note: This classpath does not include the class directory, which
may be necessary for compilation in some situations.
Get the runtime classpath, including the project’s compiled classes
example := {
val cp: Seq[File] = (fullClasspath in Runtime).value.files
...
}
Get the test classpath, including the project’s compiled test classes
example := {
val cp: Seq[File] = (fullClasspath in Test).value.files
...
}
Use packaged jars on classpaths instead of class directories
exportJars := true
This will use the result of packageBin on the classpath instead of the class
directory.
Get all managed jars for a configuration
The result of the update task has type UpdateReport, which contains the results
of dependency resolution. This can be used to extract the files for specific types
of artifacts in a specific configuration. For example, to get the jars and zips of
dependencies in the Compile configuration:
example := {
val artifactTypes = Set("jar", "zip")
val files: Seq[File] =
Classpaths.managedJars(Compile, artifactTypes, update.value)
...
}
A classpath has type Seq[Attributed[File]], which means that each entry car-
ries additional metadata. The files method provides just the raw Seq[File]
for the classpath. For example:
A classpath has type Seq[Attributed[File]], which means that each entry car-
ries additional metadata. This metadata is in the form of an AttributeMap. Use-
ful keys for entries in the map are artifact.key, module.key, and analysis.
For example,
Note: Entries may not have some or all metadata. Only entries from
source dependencies, such as internal projects, have an incremental
compilation Analysis. Only entries for managed dependencies have
an Artifact and ModuleID.
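As a sketch (the report task key is hypothetical), the metadata can be read from each entry with get:

```scala
val report = taskKey[Unit]("Prints the module and artifact behind each compile classpath entry.")

report := {
  val cp: Seq[Attributed[File]] = (dependencyClasspath in Compile).value
  for(entry <- cp) {
    // Either value may be None, e.g. for entries from internal projects.
    val art: Option[Artifact] = entry.get(artifact.key)
    val mod: Option[ModuleID] = entry.get(moduleID.key)
    println(entry.data.getName + ": module=" + mod + ", artifact=" + art)
  }
}
```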
Customizing paths
This page describes how to modify the default source, resource, and library
directories and what files get included from them.
The directory that contains the main Scala sources is by default src/main/scala.
For test Scala sources, it is src/test/scala. To change this, modify
scalaSource in the Compile (for main sources) or Test (for test sources). For
example,
Note: The Scala source directory can be the same as the Java source
directory.
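For instance, a sketch that puts main Scala sources directly under src/ and test sources under test-src/ (the directory names here are illustrative):

```scala
scalaSource in Compile := baseDirectory.value / "src"

scalaSource in Test := baseDirectory.value / "test-src"
```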
The directory that contains the main Java sources is by default src/main/java.
For test Java sources, it is src/test/java. To change this, modify javaSource
in the Compile (for main sources) or Test (for test sources).
For example,
Note: The Scala source directory can be the same as the Java source
directory.
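A parallel sketch for Java sources, again with illustrative directory names:

```scala
javaSource in Compile := baseDirectory.value / "src"

javaSource in Test := baseDirectory.value / "test-src"
```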
Change the default (unmanaged) library directory
When set for Compile, Runtime, or Test, unmanagedBase is the directory con-
taining libraries for that configuration, overriding the default. For example,
the following declares lib/main/ to contain jars only for Compile and not for
running or testing:
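A sketch of that configuration in build.sbt:

```scala
unmanagedBase in Compile := baseDirectory.value / "lib" / "main"
```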
By default, sbt includes .scala files from the project’s base directory as main
source files. To disable this, configure sourcesInBase:
sourcesInBase := false
Add an additional resource directory
To have different filters for main and test libraries, configure Compile and Test
separately:
excludeFilter in unmanagedSources := HiddenFileFilter || "*impl*"
To have different filters for main and test libraries, configure Compile and Test
separately:
Note: By default, sbt includes all files that are not hidden.
To have different filters for main and test libraries, configure Compile and Test
separately:
Note: By default, sbt includes jars, zips, and native dynamic li-
braries, excluding hidden files.
Generating files
sbt provides standard hooks for adding source and resource generation tasks.
Generate sources
def makeSomeSources(base: File): Seq[File]
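Assuming such a makeSomeSources helper that writes source files under the given directory and returns them, it can be hooked into the build with sourceGenerators; a sketch:

```scala
sourceGenerators in Compile += Def.task {
  makeSomeSources((sourceManaged in Compile).value / "demo")
}.taskValue
```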
> run
[info] Running Test
Hi
Change Compile to Test to make it a test source. For efficiency, you would only
want to generate sources when necessary and not every run.
By default, generated sources are not included in the packaged source artifact.
To do so, add them as you would other mappings. See Adding files to a package.
A source generator can return both Java and Scala sources mixed together in
the same sequence. They will be distinguished by their extension later.
Generate resources
Change Compile to Test to make it a test resource. Normally, you would only
want to generate resources when necessary and not every run.
By default, generated resources are not included in the packaged source artifact.
To do so, add them as you would other mappings. See Adding files to a package.
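The resource hook is analogous to the source hook; this sketch assumes a hypothetical makeSomeResources(base: File): Seq[File] helper:

```scala
resourceGenerators in Compile += Def.task {
  makeSomeResources((resourceManaged in Compile).value / "demo")
}.taskValue
```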
The help command is used to show available commands and search the help for
commands, tasks, or settings. If run without arguments, help lists the available
commands.
> help
The tasks command, without arguments, lists the most commonly used tasks.
It can take a regular expression to search task names and descriptions. The
verbosity can be increased to show or search less commonly used tasks. See
help tasks for details.
The settings command, without arguments, lists the most commonly used set-
tings. It can take a regular expression to search setting names and descriptions.
The verbosity can be increased to show or search less commonly used settings.
See help settings for details.
[info] test:definedSbtPlugins
[info] test:printWarnings
[info] test:discoveredMainClasses
[info] test:definedTests
[info] test:exportedProducts
[info] test:products
...
For each task, inspect tree shows the type of the value generated by the task.
For a setting, the toString of the setting is displayed. See the Inspecting
Settings page for details on the inspect command.
While the help, settings, and tasks commands display a description of a task,
the inspect command also shows the type of a setting or task and the value of
a setting. For example:
> inspect scalaVersion
[info] Setting: java.lang.String = 2.9.2
[info] Description:
[info] The version of Scala used for building.
...
The projects command displays the currently loaded projects. The projects
are grouped by their enclosing build and the current project is indicated by an
asterisk. For example,
> projects
[info] In file:/home/user/demo/
[info] * parent
[info] sub
[info] In file:/home/user/dep/
[info] sample
Show the current session (temporary) settings
session list displays the settings that have been added at the command line
for the current project. For example,
session list-all displays the settings added for all projects. For details, see
help session.
> about
[info] This is sbt 0.12.0
[info] The current project is {file:~/code/sbt.github.com/}default
[info] The current project is built against Scala 2.9.2
[info] Available Plugins: com.jsuereth.ghpages.GhPages, com.jsuereth.git.GitPlugin, com.jsuere
[info] sbt, sbt plugins, and build definitions are using Scala 2.9.2
The inspect command shows the value of a setting as part of its output, but
the show command is dedicated to this job. It shows the output of the setting
provided as an argument. For example,
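For instance, to print the value of the organization setting:

```
> show organization
```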
The show command will execute the task provided as an argument and then
print the result. Note that this is different from the behavior of the inspect
command (described in other sections), which does not execute a task and thus
can only display its type and not its generated value.
sbt detects the classes with public, static main methods for use by the run
method and to tab-complete the runMain method. The discoveredMainClasses
task does this discovery and provides as its result the list of class names. For
example, the following shows the main classes discovered in the main sources:
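For instance, at the sbt prompt:

```
> show discoveredMainClasses
```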
Interactive mode
Use tab completion
> tes<TAB>
> test
> test<TAB>
testFrameworks testListeners testLoader testOnly testOptions test:
Now, there is more than one possibility for the next character, so sbt prints the
available options. We will select testOnly and get more suggestions by entering
the rest of the command and hitting tab twice:
> testOnly<TAB><TAB>
-- sbt.DagSpecification sbt.EmptyRelationTest sbt.KeyTest sbt.Relati
The first tab inserts an unambiguous space and the second suggests names of
tests to run. The suggestion of -- is for the separator between test names and
options provided to the test framework. The other suggestions are names of
test classes for one of sbt’s modules. Test name suggestions require tests to be
compiled first. If tests have been added, renamed, or removed since the last
test compilation, the completions will be out of date until another successful
compile.
Some commands have different levels of completion. Hitting tab multiple times
increases the verbosity of completions. (Presently, this feature is only used by
the set command.)
Modify the default JLine keybindings
JLine, used by both Scala and sbt, uses a configuration file for many of its
keybindings. The location of this file can be changed with the system property
jline.keybindings. The default keybindings file is included in the sbt launcher
and may be used as a starting point for customization.
By default, sbt only displays > to prompt for a command. This can be changed
through the shellPrompt setting, which has type State => String. State
contains all state for sbt and thus provides access to all build information for
use in the prompt string.
Examples:
// set the prompt (for this build) to include the project id.
shellPrompt in ThisBuild := { state => Project.extract(state).currentRef.project + "> " }
// set the prompt (for the current project) to include the username
shellPrompt := { state => System.getProperty("user.name") + "> " }
Use history
Interactive mode remembers history even if you exit sbt and restart it. The
simplest way to access history is to press the up arrow key to cycle through
previously entered commands. Use Ctrl+r to incrementally search history back-
wards. The following commands are supported:
By default, interactive history is stored in the target/ directory for the current
project (but is not removed by a clean). History is thus separate for each
subproject. The location can be changed with the historyPath setting, which
has type Option[File]. For example, history can be stored in the root directory
for the project instead of the output directory:
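For example, a sketch:

```scala
historyPath := Some(baseDirectory.value / ".history")
```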
The history path needs to be set for each project, since sbt will use the value of
historyPath for the current project (as selected by the project command).
The previous section describes how to configure the location of the history file.
This setting can be used to share the interactive history among all projects in a
build instead of using a different history for each project. The way this is done
is to set historyPath to be the same file, such as a file in the root project’s
target/ directory:
historyPath :=
Some( (target in LocalRootProject).value / ".history")
The in LocalRootProject part means to get the output directory for the root
project for the build.
If, for whatever reason, you want to disable history, set historyPath to None
in each project it should be disabled in:
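That is:

```scala
historyPath := None
```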
This runs clean and then compile before entering the interactive prompt. If
either clean or compile fails, sbt will exit without going to the prompt. To enter
the prompt whether or not these initial commands succeed, prepend -shell,
which means to run shell if any command fails. For example,
Configure and use logging
When a command is run, more detailed logging output is sent to a file than
to the screen (by default). This output can be recalled for the command just
executed by running last.
For example, the output of run when the sources are up to date is:
> run
[info] Running A
Hi!
[success] Total time: 0 s, completed Feb 25, 2012 1:00:00 PM
> last
[debug] Running task... Cancelable: false, max worker threads: 4, check cycles: false
[debug]
[debug] Initial source changes:
[debug] removed:Set()
[debug] added: Set()
[debug] modified: Set()
[debug] Removed products: Set()
[debug] Modified external sources: Set()
[debug] Modified binary dependencies: Set()
[debug] Initial directly invalidated sources: Set()
[debug]
[debug] Sources indirectly invalidated by:
[debug] product: Set()
[debug] binary dep: Set()
[debug] external source: Set()
[debug] Initially invalidated: Set()
[debug] Copy resource mappings:
[debug]
[info] Running A
[debug] Starting sandboxed run...
[debug] Waiting for threads to exit or System.exit to be called.
[debug] Classpath:
[debug] /tmp/e/target/scala-2.9.2/classes
[debug] /tmp/e/.sbt/0.12.0/boot/scala-2.9.2/lib/scala-library.jar
[debug] Waiting for thread runMain to exit
[debug] Thread runMain exited.
[debug] Interrupting remaining threads (should be all daemons).
[debug] Sandboxed run complete..
[debug] Exited with code 0
[success] Total time: 0 s, completed Jan 1, 2012 1:00:00 PM
Configuration of the logging level for the console and for the backing file are
described in following sections.
When a task is run, more detailed logging output is sent to a file than to the
screen (by default). This output can be recalled for a specific task by running
last <task>. For example, the first time compile is run, output might look
like:
> compile
[info] Updating {file:/.../demo/}example...
[info] Resolving org.scala-lang#scala-library;2.9.2 ...
[info] Done updating.
[info] Compiling 1 Scala source to .../demo/target/scala-2.9.2/classes...
[success] Total time: 0 s, completed Jun 1, 2012 1:11:11 PM
The output indicates that both dependency resolution and compilation were
performed. The detailed output of each of these may be recalled individually.
For example, last update recalls the output of the dependency resolution and
last compile recalls the output of the compilation.
Show warnings from the previous compilation
The Scala compiler does not print the full details of warnings by default. Com-
piling code that uses the deprecated error method from Predef might generate
the following output:
> compile
[info] Compiling 1 Scala source to <...>/classes...
[warn] there were 1 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
> printWarnings
[warn] A.scala:2: method error in object Predef is deprecated: Use sys.error(message) instead
[warn] def x = error("Failed.")
[warn] ^
The quickest way to change logging levels is by using the error, warn, info, or
debug commands. These set the default logging level for commands and tasks.
For example,
> warn
will by default show only warnings and errors. To set the logging level before
any commands are executed on startup, use -- before the logging level. For
example,
$ sbt --warn
> compile
[warn] there were 2 feature warning(s); re-run with -feature for details
[warn] one warning found
[success] Total time: 4 s, completed ...
>
Change the logging level for a specific task, configuration, or project
The amount of logging is controlled by the logLevel setting, which takes values
from the Level enumeration. Valid values are Error, Warn, Info, and Debug
in order of increasing verbosity. The logging level may be configured globally,
as described in the previous section, or it may be applied to a specific project,
configuration, or task. For example, to change the logging level for compilation
to only show warnings and errors:
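One way to express this is to scope logLevel to the compile task:

```scala
logLevel in compile := Level.Warn
```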
A common scenario is that after running a task, you notice that you need
more information than was shown by default. A logLevel based solution typ-
ically requires changing the logging level and running a task again. However,
there are two cases where this is unnecessary. First, warnings from a previous
compilation may be displayed using printWarnings for the main sources or
test:printWarnings for test sources. Second, output from the previous execution
is available either for a single task or in its entirety. See the section on
printWarnings and the sections on previous output.
By default, sbt hides the stack trace of most exceptions thrown during execution.
It prints a message that indicates how to display the exception. However, you
may want to show more of the stack traces by default.
The setting to configure is traceLevel, which is a setting with an Int value.
When traceLevel is set to a negative value, no stack traces are shown. When
it is zero, the stack trace is displayed up to the first sbt stack frame. When
positive, the stack trace is shown up to that many stack frames.
For example, the following configures sbt to show stack traces up to the first
sbt frame:
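For instance, at the sbt prompt:

```
> set every traceLevel := 0
```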
The every part means to override the setting in all scopes. To change the trace
printing behavior for a single project, configuration, or task, scope traceLevel
appropriately:
Print the output of tests immediately instead of buffering
By default, sbt buffers the logging output of a test until the whole class finishes.
This is so that output does not get mixed up when executing in parallel. To
disable buffering, set the logBuffered setting to false:
logBuffered := false
The setting extraLoggers can be used to add custom loggers. A custom logger
should implement AbstractLogger. extraLoggers is a function ScopedKey[_]
=> Seq[AbstractLogger]. This means that it can provide different logging
based on the task that requests the logger.
extraLoggers := {
val currentFunction = extraLoggers.value
(key: ScopedKey[_]) => {
myCustomLogger(key) +: currentFunction(key)
}
}
Here, we take the current function currentFunction for the setting and provide
a new function. The new function prepends our custom logger to the ones
provided by the old function.
The special task streams provides per-task logging and I/O via a Streams in-
stance. To log, a task uses the log member from the streams task:
myTask := {
val log = streams.value.log
log.warn("A warning.")
}
Project metadata
A project should define name and version. These will be used in various parts of
the build, such as the names of generated artifacts. Projects that are published
to a repository should also override organization.
name := "Your project name"
version := "1.0"
organization := "org.example"
By convention, this is a reverse domain name that you own, typically one specific
to your project. It is used as a namespace for projects.
A full/formal name can be defined in the organizationName setting. This is
used in the generated pom.xml. If the organization has a web site, it may be
set in the organizationHomepage setting. For example:
organizationHomepage := Some(url("http://example.org"))
homepage := Some(url("http://scala-sbt.org"))
startYear := Some(2008)
Configure packaging
exportJars := true
The jar will be used by run, test, console, and other tasks that use the full
classpath.
By default, sbt constructs a manifest for the binary package from settings such
as organization and mainClass. Additional attributes may be added to the
packageOptions setting scoped by the configuration and package task.
Main attributes may be added with Package.ManifestAttributes. There are
two variants of this method: one accepts repeated arguments that map
an attribute of type java.util.jar.Attributes.Name to a String value, and the
other maps attribute names (type String) to String values.
For example,
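A sketch using the Attributes.Name variant:

```scala
packageOptions in (Compile, packageBin) +=
  Package.ManifestAttributes( java.util.jar.Attributes.Name.SEALED -> "true" )
```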
The artifactName setting controls the name of generated packages. See the
Artifacts page for details.
Modify the contents of the package
Note that mappings is scoped by the configuration and the specific package
task. For example, the mappings for the test source package are defined by the
mappings in (Test, packageSrc) task.
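For instance, a sketch that adds a single extra file to the main binary package (the paths are illustrative):

```scala
mappings in (Compile, packageBin) += {
  (baseDirectory.value / "example.txt") -> "out/example.txt"
}
```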
Running commands
> ~ ;clean;compile
The < command reads commands from the files provided to it as arguments.
Run help < at the sbt prompt for details.
Define an alias for a command or task
The alias command defines, removes, and displays aliases for commands. Run
help alias at the sbt prompt for details.
Example usage:
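For instance, defining an alias a for the about command, running it, and then removing it:

```
> alias a=about
> a
> alias a=
```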
The eval command compiles and runs the Scala expression passed to it as an
argument. The result is printed along with its type. For example,
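For instance:

```
> eval System.getProperty("user.name")
```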
The scalaVersion setting configures the version of Scala used for compilation. By
default, sbt also adds a dependency on the Scala library with this version. See the
next section for how to disable this automatic dependency. If the Scala version
is not specified, the version sbt was built against is used. It is recommended to
explicitly specify the version of Scala.
For example, to set the Scala version to “2.11.1”,
scalaVersion := "2.11.1"
Disable the automatic dependency on the Scala library
sbt adds a dependency on the Scala standard library by default. To disable this
behavior, set the autoScalaLibrary setting to false.
autoScalaLibrary := false
To set the Scala version in all scopes to a specific value, use the ++ command.
For example, to temporarily use Scala 2.10.4, run:
> ++ 2.10.4
Defining the scalaHome setting with the path to the Scala home directory will
use that Scala installation. sbt still requires scalaVersion to be set when a
local Scala version is used. For example,
scalaVersion := "2.10.0-local"
scalaHome := Some(file("/path/to/scala/home/"))
The consoleQuick action retrieves dependencies and puts them on the classpath
of the Scala REPL. The project’s sources are not compiled, but sources of any
source dependencies are compiled. To enter the REPL with test dependencies
on the classpath but without compiling test sources, run test:consoleQuick.
This will force compilation of main sources.
The console action retrieves dependencies and compiles sources and puts them
on the classpath of the Scala REPL. To enter the REPL with test dependencies
and compiled test sources on the classpath, run test:console.
Enter the Scala REPL with plugins and the build definition on the
classpath
> consoleProject
Define the initial commands evaluated when entering the Scala REPL
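This is controlled by the initialCommands setting; for example, a sketch that runs an import when the REPL starts (the package name is illustrative):

```scala
initialCommands in console := "import myproject._"
```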
Use the Scala REPL from project code
sbt runs tests in the same JVM as sbt itself and Scala classes are not in the
same class loader as the application classes. This is also the case in console
and when run is not forked. Therefore, when using the Scala interpreter, it is
important to set it up properly to avoid an error message like:
The key is to initialize the Settings for the interpreter using embeddedDefaults.
For example:
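A sketch, where MyType stands for any type from the application's classpath:

```scala
val settings = new scala.tools.nsc.Settings
settings.embeddedDefaults[MyType]
```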
sbt will run javadoc if there are only Java sources in the project. If there are
any Scala sources, sbt will run scaladoc. (This situation results from scaladoc
not processing Javadoc comments in Java sources nor linking to Javadoc.)
scalacOptions in (Compile,doc) := Seq("-groups", "-implicits")
Set autoAPIMappings := true for sbt to tell scaladoc where it can find the
API documentation for managed dependencies. This requires that dependencies
have this information in their metadata and that you are using scaladoc for Scala
2.10.2 or later.
The apiMappings task manually associates jars with the base URL of their API
documentation. Mappings for managed dependencies can be added automatically by
autoAPIMappings, so this manual configuration is typically done for unmanaged
dependencies. The File key is the location of the dependency as passed
to the classpath. The URL value is the base URL of the API documentation for
the dependency. For example,
apiMappings += (
(unmanagedBase.value / "a-library.jar") ->
url("http://example.org/api/")
)
Set apiURL to define the base URL for the Scaladocs for your library. This will
enable clients of your library to automatically link against the API documenta-
tion using autoAPIMappings. (This only works for Scala 2.10.2 and later.) For
example,
apiURL := Some(url("http://example.org/api/"))
This information will get included in a property of the published pom.xml, where
it can be automatically consumed by sbt.
Triggered execution
You can make a command run when certain files change by prefixing the com-
mand with ~. Monitoring is terminated when enter is pressed. This triggered
execution is configured by the watch setting, but typically the basic settings
watchSources and pollInterval are modified as described in later sections.
The original use-case for triggered execution was continuous compilation:
> ~ test:compile
> ~ compile
You can use the triggered execution feature to run any command or task, how-
ever. The following will poll for changes to your source code (main or test) and
run testOnly for the specified test.
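For instance (the test class name here is illustrative):

```
> ~ testOnly org.example.MySpec
```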
Run multiple commands when sources change
> ~ ;a ;b
• watchSources defines the files for a single project that are monitored
for changes. By default, a project watches resources and Scala and Java
sources.
• watchTransitiveSources then combines the watchSources for the cur-
rent project and all execution and classpath dependencies (see .scala build
definition for details on inter-project dependencies).
pollInterval := 1000 // in ms
Examples
This section of the documentation has example sbt build definitions and code.
Contributions are welcome!
You may want to read the Getting Started Guide as a foundation for under-
standing the examples.
.sbt build examples
Listed here are some examples of settings (each setting is independent). See
.sbt build definition for details.
Note that blank lines are used to separate individual settings. Avoid using
blank lines within a single multiline expression. As explained in .sbt build
definition, each setting is otherwise a normal Scala expression with expected
type sbt.SettingDefinition.
version := "1.0"
organization := "org.myproject"
// increase the time between polling for file changes when using continuous execution
pollInterval := 1000
// append several options to the list of options passed to the Java compiler
javacOptions ++= Seq("-source", "1.5", "-target", "1.5")
// define the statements initially evaluated when entering 'console', 'consoleQuick', or 'consoleProject'
initialCommands := """
import System.{currentTimeMillis => now}
def time[T](f: => T): T = {
val start = now
try { f } finally { println("Elapsed: " + (now - start)/1000.0 + " s") }
}
"""
// set the initial commands when entering 'console' or 'consoleQuick', but not 'consoleProject'
initialCommands in console := "import myproject._"
// set the prompt (for this build) to include the project id.
shellPrompt in ThisBuild := { state => Project.extract(state).currentRef.project + "> " }
// set the prompt (for the current project) to include the username
shellPrompt := { state => System.getProperty("user.name") + "> " }
// set the location of the JDK to use for compiling Java code.
// if 'fork' is true, this is used for 'run' as well
javaHome := Some(file("/usr/lib/jvm/sun-jdk-1.6"))
// Use Scala from a directory on the filesystem instead of retrieving from a repository
scalaHome := Some(file("/home/user/scala/trunk/"))
// do not aggregate the 'clean' task across dependent projects
aggregate in clean := false
// only show warnings and errors on the screen for all tasks (the default is Info)
// individual tasks can then be more verbose by scoping logLevel to that task,
// e.g., logLevel in compile := Level.Info
logLevel := Level.Warn
// Directly specify credentials for publishing.
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus.scala-tools.org", "admin", "admin123")
// Exclude transitive dependencies, e.g., include log4j without including logging via jdmk, jmx, or jms.
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx"),
ExclusionRule(organization = "javax.jms")
)
import sbt._
import Keys._
object BuildSettings {
  val buildOrganization = "odp"
  val buildVersion = "2.0.29"
  val buildScalaVersion = "2.9.0-1"

  // current git branch, shown in the prompt below; falls back to "-" when
  // git is unavailable (this helper is sketched here; it was elided above)
  def currBranch =
    (Process("git status -sb").lines_!.headOption getOrElse "-").stripPrefix("## ")

  val buildShellPrompt = {
    (state: State) => {
      val currProject = Project.extract(state).currentProject.id
      "%s:%s:%s> ".format(
        currProject, currBranch, BuildSettings.buildVersion
      )
    }
  }
}
object Resolvers {
  val sunrepo = "Sun Maven2 Repo" at "http://download.java.net/maven/2"
  val sunrepoGF = "Sun GF Maven2 Repo" at "http://download.java.net/maven/glassfish"
  val oraclerepo = "Oracle Maven2 Repo" at "http://download.oracle.com/maven"
}
object Dependencies {
  val logbackVer = "0.9.16"
  val grizzlyVer = "1.9.19"

  // the individual module definitions were elided above; the coordinates
  // sketched here are illustrative
  val logbackcore = "ch.qos.logback" % "logback-core" % logbackVer
  val logbackclassic = "ch.qos.logback" % "logback-classic" % logbackVer
  val jacksonjson = "org.codehaus.jackson" % "jackson-mapper-lgpl" % "1.7.2"
  val scalatest = "org.scalatest" % "scalatest" % "1.4.1" % "test"

  // Sub-project specific dependencies
  val commonDeps = Seq(
    logbackcore,
    logbackclassic,
    jacksonjson,
    scalatest
  )
}
lazy val pricing_service = Project (
"pricing-service",
file ("cdap2-pricing-service"),
settings = buildSettings
) dependsOn (pricing, server)
External Builds
This is an example .scala build definition that demonstrates using Ivy configu-
rations to group dependencies.
The utils module provides utilities for other modules. It uses Ivy configurations
to group dependencies so that a dependent project doesn’t have to pull in all
dependencies if it only uses a subset of functionality. This can be an alternative
to having multiple utilities modules (and consequently, multiple utilities jars).
In this example, consider a utils project that provides utilities related to both
Scalate and Saxon. It therefore needs both Scalate and Saxon on the compi-
lation classpath and a project that uses all of the functionality of ‘utils’ will
need these dependencies as well. However, project a only needs the utilities
related to Scalate, so it doesn’t need Saxon. By depending only on the scalate
configuration of utils, it only gets the Scalate-related dependencies.
import sbt._
import Keys._
/********** Projects ************/
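The project definitions themselves were elided; they might be sketched as follows. The configuration names, dependency coordinates, and project layout here are illustrative, not the manual's originals:

```scala
// custom Ivy configurations grouping the utils dependencies
lazy val ScalateConfig = config("scalate")
lazy val SaxonConfig = config("saxon")

lazy val utils = Project("utils", file("utils")) settings(
  ivyConfigurations ++= Seq(ScalateConfig, SaxonConfig),
  // each dependency is assigned to the configuration it belongs to
  libraryDependencies ++= Seq(
    "org.fusesource.scalate" % "scalate-core" % "1.5.3" % ScalateConfig.name,
    "net.sf.saxon" % "saxon" % "8.7" % SaxonConfig.name
  )
)

// project a only needs the Scalate-related utilities, so it depends
// only on the 'scalate' configuration of utils and never pulls in Saxon
lazy val a = Project("a", file("a")) dependsOn(utils % "compile->scalate")
```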
Advanced command example
This is an advanced example showing some of the power of the new settings
system. It shows how to temporarily modify all declared dependencies in the
build, regardless of where they are defined. It directly operates on the final
Seq[Setting[_]] produced from every setting involved in the build.
The modifications are applied by running canonicalize. A reload or using set
reverts the modifications, requiring canonicalize to be run again.
This particular example shows how to transform all declared dependencies on
ScalaCheck to use version 1.8. As an exercise, you might try transforming other
dependencies, the repositories used, or the scalac options used. It is possible to
add or remove settings as well.
This kind of transformation is possible directly on the settings of Project, but it
would not include settings automatically added from plugins or build.sbt files.
What this example shows is doing it unconditionally on all settings in all projects
in all builds, including external builds.
import sbt._
import Keys._
// Define the command. This takes the existing settings (including any session settings)
// and applies 'f' to each Setting[_]
def canonicalize = Command.command("canonicalize") { (state: State) =>
val extracted = Project.extract(state)
import extracted._
val transformed = session.mergeSettings map ( s => f(s) )
val newStructure = Load.reapply(transformed, structure)
Project.setProject(session, newStructure, state)
}
// Transforms a Setting[_].
def f(s: Setting[_]): Setting[_] = s.key.key match {
// transform all settings that modify libraryDependencies
case Keys.libraryDependencies.key =>
// hey scalac. T == Seq[ModuleID]
s.asInstanceOf[Setting[Seq[ModuleID]]].mapInit(mapLibraryDependencies)
// preserve other settings
case _ => s
}
// This must be idempotent because it gets applied after every transformation.
// That is, if the user does:
// libraryDependencies += a
// libraryDependencies += b
// then this method will be called for Seq(a) and Seq(a,b)
def mapLibraryDependencies(key: ScopedKey[Seq[ModuleID]], value: Seq[ModuleID]): Seq[ModuleID] =
  value map mapSingle

// Transforms a single ModuleID: use version 1.8 for all scalacheck dependencies
def mapSingle(module: ModuleID): ModuleID =
  if (module.name == "scalacheck") module.copy(revision = "1.8")
  else module
Project Information
How do I get help? Please use Stack Overflow for questions. Use the sbt-dev
mailing list for comments and discussions about sbt development.
• Please state the problem or question clearly and provide enough context.
Code examples and build transcripts are often useful when appropriately
edited.
• Providing small, reproducible examples is a good way to get help quickly.
• Include relevant information such as the version of sbt and Scala being
used.
How do I report a bug? Please use the issue tracker to report confirmed
bugs. Do not use it to ask questions or to determine if something is a bug.
Usage
My last command didn’t work but I can’t see an explanation. Why? By default, sbt suppresses most stack traces and debugging information. To see the full output for the most recent execution of a task, run the last command with the task name. For example, run:
> last update
and it will display the full output from the last run of the update command.
How do I disable ansi codes in the output? Sometimes sbt doesn’t detect
that ansi codes aren’t supported and you get output that looks like:
or ansi codes are supported but you want to disable colored output. To com-
pletely disable ansi codes, set the sbt.log.format system property to false.
For example,
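One way to set the property is on the command line when launching sbt:

```
sbt -Dsbt.log.format=false
```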
How can I start a Scala interpreter (REPL) with sbt project configu-
ration (dependencies, etc.)? You may run sbt console.
Build definitions
What are the :=, +=, and ++= methods? These are methods on keys used
to construct a Setting or a Task. The Getting Started Guide covers all these
methods, see .sbt build definition and more kinds of setting for example.
What is the % method? It’s used to create a ModuleID from strings, when
specifying managed dependencies. Read the Getting Started Guide about li-
brary dependencies.
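For example, the two % calls below build up a ModuleID from its organization, name, and revision (the log4j coordinates match the example used earlier in this manual):

```scala
// organization % name % revision
libraryDependencies += "log4j" % "log4j" % "1.2.15"
```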
How do I add files to a jar package? The files included in an artifact are
configured by default by a task mappings that is scoped by the relevant package
task. The mappings task returns a sequence Seq[(File,String)] of mappings
from the file to include to the path within the jar. See mapping files for details
on creating these mappings.
For example, to add generated sources to the packaged source artifact:
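A sketch of such a setting, mirroring the approach described in the next paragraph:

```scala
// add generated sources to the packaged source artifact, relativizing each
// file against the managed source base and flattening anything outside it
mappings in (Compile, packageSrc) ++= {
  import Path.{flat, relativeTo}
  val base = (sourceManaged in Compile).value
  val srcs = (managedSources in Compile).value
  srcs pair (relativeTo(base) | flat)
}
```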
This takes sources from the managedSources task and relativizes them against
the managedSource base directory, falling back to a flattened mapping. If a
source generation task doesn’t write the sources to the managedSource directory,
the mapping function would have to be adjusted to try relativizing against
additional directories or something more appropriate for the generator.
How can a task avoid redoing work if the input files are unchanged?
There is basic support for only doing work when input files have changed or
when the outputs haven’t been generated yet. This support is primitive and
subject to change.
The relevant methods are two overloaded methods called FileFunction.cached.
Each requires a directory in which to store cached data. Sample usage is:
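A minimal sketch, assuming a hypothetical task that copies sources into a target directory (the task key and file layout are illustrative):

```scala
lazy val copySources = taskKey[Seq[File]]("Copies sources, redoing work only on changes.")

copySources := {
  val outDir = target.value / "copied"
  // cache data is stored under the task's cache directory;
  // the styles shown are the defaults and could be omitted
  val cachedFun = FileFunction.cached(
      streams.value.cacheDirectory / "copy-sources",
      FilesInfo.lastModified, FilesInfo.exists) { (in: Set[File]) =>
    // this function is only invoked when an input has changed
    // or an output is missing
    in.map { src =>
      val out = outDir / src.getName
      IO.copyFile(src, out)
      out
    }
  }
  cachedFun((sources in Compile).value.toSet).toSeq
}
```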
There are two additional arguments for the first parameter list that allow the file
tracking style to be explicitly specified. By default, the input tracking style is
FilesInfo.lastModified, based on a file’s last modified time, and the output
tracking style is FilesInfo.exists, based only on whether the file exists. The
other available style is FilesInfo.hash, which tracks a file based on a hash of
its contents. See the FilesInfo API for details.
A more advanced version of FileFunction.cached passes a data structure of
type ChangeReport describing the changes to input and output files since the
last evaluation. This version of cached also expects the set of files generated as
output to be the result of the evaluated function.
Extending sbt
This uses the main options as base options because of +=. Use := to ignore the
main options:
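A sketch of what the elided definition might look like (the Samples configuration name and the option shown are illustrative):

```scala
// a new 'samples' configuration that extends Compile
lazy val Samples = config("samples") extend(Compile)

lazy val root = Project("root", file("."))
  .configs(Samples)
  // add the standard compile/run/console/package tasks, scoped to samples
  .settings(inConfig(Samples)(Defaults.configSettings): _*)
  // += appends to the options delegated from the main scope;
  // use := instead to ignore the main options
  .settings(scalacOptions in Samples += "-deprecation")
```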
The example adds all of the usual compilation related settings and tasks to
samples:
samples:run
samples:runMain
samples:compile
samples:console
samples:consoleQuick
samples:scalacOptions
samples:fullClasspath
samples:package
samples:packageSrc
...
How do I add a test configuration? See the Additional test configurations
section of Testing.
How can I create a custom run task, in addition to run? This answer
is extracted from a mailing list discussion.
Read the Getting Started Guide up to custom settings for background.
A basic run task is created by:
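A sketch using sbt's fullRunTask (the key name, main class, and arguments are placeholders):

```scala
lazy val myRunTask = taskKey[Unit]("A custom run task.")

// run the main class foo.Foo on the Test classpath with fixed arguments
fullRunTask(myRunTask, Test, "foo.Foo", "arg1", "arg2")

// optionally run in a forked JVM
fork in myRunTask := true
```

To accept arguments on the command line instead, the same idea applies with an InputKey and fullRunInputTask in place of the task key and fullRunTask.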
As an example, consider a proguard task. This task needs the ProGuard jars
in order to run the tool. First, define and add the new configuration:
val ProguardConfig = config("proguard") hide

ivyConfigurations += ProguardConfig
Then,
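The elided steps might be sketched like this; the ProGuard version and the task wiring are illustrative:

```scala
// resolve ProGuard in the custom configuration so it stays separate
// from the project's own dependencies
libraryDependencies +=
  "net.sf.proguard" % "proguard" % "4.4" % ProguardConfig.name

lazy val proguard = taskKey[Unit]("Runs ProGuard.")

// an intermediate classpath task holding the resolved ProGuard jars
lazy val proguardClasspath = taskKey[Seq[File]]("Classpath for running ProGuard.")

proguardClasspath :=
  Classpaths.managedJars(ProguardConfig, Set("jar"), update.value).map(_.data)

proguard := {
  val cp: Seq[File] = proguardClasspath.value
  // ... fork the ProGuard tool using cp ...
}
```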
Defining the intermediate classpath is optional, but it can be useful for debug-
ging or if it needs to be used by multiple tasks. It is also possible to specify
artifact types inline. This alternative proguard task would look like:
proguard := {
  val artifactTypes = Set("jar")
  val cp: Seq[File] =
    Classpaths.managedJars(ProguardConfig, artifactTypes, update.value).map(_.data)
  // ... do something with cp, which includes the ProGuard jars ...
}
Because these components are added to the ~/.sbt/boot/ directory and
~/.sbt/boot/ may be read-only, this can fail. In this case, the user has
generally intentionally set sbt up this way, so error recovery is not typically
necessary (just a short error message explaining the situation).
How can I take action when the project is loaded or unloaded? The
single, global setting onLoad is of type State => State (see State and Actions)
and is executed once, after all projects are built and loaded. There is a similar
hook onUnload for when a project is unloaded. Project unloading typically
occurs as a result of a reload command or a set command. Because the
onLoad and onUnload hooks are global, modifying this setting typically involves
composing a new function with the previous value. The following example shows
the basic structure of defining onLoad:
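In outline, with f standing for whatever State transformation you want to run at load time:

```scala
// placeholder for your own State transformation
val f: State => State = identity

onLoad in Global := {
  // compose the new transformation with whatever was configured before
  val previous = (onLoad in Global).value
  f compose previous
}
```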
Example of project load/unload hooks The following example maintains
a count of the number of times a project has been loaded and prints that number:
{
// the key for the current count
val key = AttributeKey[Int]("loadCount")
// the State transformer
val f = (s: State) => {
val previous = s get key getOrElse 0
println("Project load count: " + previous)
s.put(key, previous + 1)
}
onLoad in Global := {
val previous = (onLoad in Global).value
f compose previous
}
}
Errors
A more subtle variation of this error occurs when using scoped settings.
This setting varies between the test and compile scopes. The solution is to use
the scoped setting, both as the input to the initializer and as the setting being
updated.
Dependency Management
How do I resolve a checksum error? This error occurs when the published
checksum, such as a sha1 or md5 hash, differs from the checksum computed for
a downloaded artifact, such as a jar or pom.xml. An example of such an error
is:
The invalid checksum should generally be reported to the repository owner (as
was done for the above error). In the meantime, you can temporarily disable
checking with the following setting:
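One way to do this is to clear the list of checksums verified during update:

```scala
// temporarily disable checksum verification during dependency resolution
checksums in update := Nil
```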
a version of the plugin that’s compiled for 2.9.0, and it usually won’t find one. That’s
because sbt doesn’t know the dependency is a plugin.
To tell sbt that the dependency is an sbt plugin, make sure you define your
global plugins in a .sbt file in ~/.sbt/plugins/. sbt knows that files in
~/.sbt/plugins are only to be used by sbt itself, not as part of the general build
definition. If you define your plugins in a file under that directory, they won’t
foul up your cross-compilations. Any file name ending in .sbt will do, but most
people use ~/.sbt/plugins/build.sbt or ~/.sbt/plugins/plugins.sbt.
Miscellaneous
How do I use the Scala interpreter in my code? sbt runs tests in the
same JVM as sbt itself and Scala classes are not in the same class loader as the
application classes. Therefore, when using the Scala interpreter, it is important
to set it up properly to avoid an error message like:
The key is to initialize the Settings for the interpreter using embeddedDefaults.
For example:
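A sketch of the initialization for the Scala 2.9-era API that this section targets (MyType is a placeholder for any type loaded from your application's classpath; the remaining Interpreter arguments are elided):

```scala
import scala.tools.nsc.{Interpreter, Settings}

val settings = new Settings
// configure the interpreter's classpath from the class loader of MyType
settings.embeddedDefaults[MyType]
val interpreter = new Interpreter(settings /*, ... */)
```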
How do I migrate from 0.7 to 0.10+? See the migration page first and
then the following questions.
Where has 0.7’s lib_managed gone? By default, sbt 0.13.5 loads managed
libraries from your ivy cache without copying them to a lib_managed directory.
This fixes some bugs with the previous solution and keeps your project directory
small. If you want to insulate your builds from the ivy cache being cleared, set
retrieveManaged := true and the dependencies will be copied to lib_managed
as a build-local cache (while avoiding the issues of lib_managed in 0.7.x).
This does mean that existing solutions for sharing libraries with your favoured
IDE may not work. Refer to Community Plugins page for a list of currently
available plugins for your IDE.
What are the commands I can use in 0.13.5 vs. 0.7? For a list of
commands, run help. For details on a specific command, run help <command>.
To view a list of tasks defined on the current project, run tasks. Alternatively,
see the Running page in the Getting Started Guide for descriptions of common
commands and tasks.
If in doubt, start by trying the old command, as it may just work. The
built-in TAB completion will also assist you: just press TAB at the
beginning of a line and see what you get.
The following commands work pretty much as in 0.7 out of the box:
reload
update
compile
test
testOnly
publishLocal
exit
My tests all run really fast but some are broken that weren’t in 0.7!
Be aware that compilation and tests run in parallel by default in sbt 0.13.5. If
your test code isn’t thread-safe then you may want to change this behaviour by
adding one of the following to your build.sbt:
// Execute tests in the current project serially.
// Tests from other projects may still run concurrently.
parallelExecution in Test := false

// Execute everything serially (including compilation and tests)
parallelExecution := false
What happened to the web development and Web Start support since
0.7? Web application support was split out into a plugin. See the xsbt-web-
plugin project.
For an early version of an xsbt Web Start plugin, visit the xsbt-webstart project.
How are inter-project dependencies different from 0.7? In 0.7.x, passing project A
to project B's constructor meant that the B project had a classpath and execution dependency on A and A
had a configuration dependency on B. Specifically, in 0.7.x:
In 0.13.5, declare the specific type of dependency you want. Read about multi-
project builds in the Getting Started Guide for details.
Where can I find plugins for 0.13.5? See Community Plugins for a list of
currently available plugins.
Index
This is an index of common methods, types, and values you might find in an
sbt build definition. For command names, see Running. For available plugins,
see the plugins list.
Dependency Management
• Initialize describes how to initialize a setting using other settings, but isn’t
bound to a particular setting yet. Combined with an initialization method
and a setting to initialize, it produces a full Setting.
• TaskKey, SettingKey, and InputKey are keys that represent a task or
setting. These are not the actual tasks, but keys that are used to refer
to them. They can be scoped to produce ScopedTask, ScopedSetting,
and ScopedInput. These form the base types that provide the Settings
methods.
• InputTask parses and tab completes user input, producing a task to run.
• Task is the type of a task. A task is an action that runs on demand. This
is in contrast to a setting, which is run once at project initialization.
Process
Build Structure
• Build is the trait implemented for a .scala build definition, which defines
project relationships and settings.
• Plugin is the trait implemented for sbt plugins.
• Project is both a trait and a companion object that declares a single
module in a build. See .scala build definition.
• Keys is an object that provides all of the built-in keys for settings and
tasks.
• State contains the full state for a build. It is mainly used by Commands
and sometimes Input Tasks. See also State and Actions.
Methods
Settings and Tasks See the Getting Started Guide for details.
• :=, +=, ++= These construct a Setting, which is the fundamental type in
the settings system.
• value This uses the value of another setting or task in the definition of
a new setting or task. This method is special (it is a macro) and cannot
be used except in the argument of one of the setting definition methods
above (:=, …) or in the standalone construction methods Def.setting and
Def.task. See more about settings for details.
• in specifies the Scope or part of the Scope of a setting being referenced.
See scopes.
File and IO See RichFile, PathFinder, and Paths for the full documentation.
• / When called on a single File, this is new File(x,y). For Seq[File], this
is applied to each member of the sequence.
• * and ** are methods for selecting children (*) or descendants (**) of a
File or Seq[File] that match a filter.
• |, ||, &&, &, -, and -- are methods for combining filters, which are often
used for selecting Files. See NameFilter and FileFilter. Note that methods
with these names also exist for other types, such as collections (like Seq)
and Parser (see Parsing Input).
• pair Used to construct mappings from a File to another File or to a
String. See Mapping Files.
• get forces a PathFinder (a call-by-name data structure) to a strict
Seq[File] representation. This is a common name in Scala, used by
types like Option.
Parsing These methods are used to build up Parsers from smaller Parsers.
They closely follow the names of the standard library’s parser combinators. See
Parsing Input for the full documentation. These are used for Input Tasks and
Commands.
• ^^^ Produces a constant value when a Parser matches.
• +, * Postfix repetition methods. These are common method names in
Scala.
• map, flatMap Transforms the result of a Parser. These are common
method names in Scala.
• filter Restricts the inputs that a Parser matches on. This is a common
method name in Scala.
• - Prefix negation. Only matches the input when the original parser doesn’t
match the input.
• examples, token Tab completion
• !!! Provides an error message to use when the original parser doesn’t
match the input.
Processes These methods are used to fork external processes. Note that this
API has been included in the Scala standard library for version 2.9. Process-
Builder is the builder type and Process is the type representing the actual forked
process. The methods to combine processes start with # so that they share the
same precedence.
• run, !, !!, !<, lines, lines_! are different ways to start a process once
it has been defined. The lines variants produce a Stream[String] to obtain
the output lines.
• #<, #<<, #> are used to get input for a process from a source or send the
output of a process to a sink.
• #| is used to pipe output from one process into the input of another.
• #||, #&&, ### sequence processes in different ways.