Adapting and Using Existing Data with Sencha Touch 2

This post is a follow-up to my talk at the London Sencha Touch User Group on 11th July. The talk was about the steps I took to adapt the existing events data on www.artsfest.org.uk for use by the Sencha Touch 2 app I have developed to list Artsfest events in a more mobile-friendly way. The prototype for this is now at www.weloveit.so/artsfest2. I will go through the following in this post:

The Problem

1. How to adapt and reduce existing data to the data you actually need

2. How to import this data into a MongoDB database

3. How to access the data through RESTful URLs in a Node.js app

4. How to set up a MongoHQ database and export the data to it

5. How to put the Node.js app onto Heroku

6. Example of a Sencha Touch 2 app calling these URLs as the proxy for the data

All code is available on my GitHub. For the data set up on MongoDB, please look at the sourceJSON repo. For the Node.js app it's the Nodejs.Code repo and, finally, for the Sencha Touch code, as you might have guessed, it's the SenchaCode repo.

The Problem

There are lots of examples of how to build demo apps from the ground up with Sencha Touch 2 (ST2). But what if you are using data from an existing website or application and you cannot pull the data directly from the existing site, as is the case with Artsfest? This post therefore presumes you have no control over the data: you are simply acquiring it and using it for your own ends. This was the dilemma I had when creating my ST2 app. There are several issues at play here; the most important are:

– Options for data to be served to the ST2 app:

The choices you have (as far as I’m aware) for getting data into Sencha are YQL, JSON/JSONP or AJAX.

– How to convert the source data to what you want to serve to the ST2 app.

For this project the choice of data format was fairly straightforward: the Artsfest website can feed JSON data, so I went with JSON to supply to the ST2 app, as it wouldn’t then take much to convert the existing data into something more suitable for my needs. But what if your data source is a SQL Server or MySQL database? To be honest I haven’t looked too deeply into how to convert conventional database data to JSON. It seems that with newer technologies, such as this MVC example, you can convert to JSON quite easily, but that presumes you have control over the complete back end, a luxury I don’t have. A quick google found this SQL Server solution and this MySQL solution for converting to JSON. More thorough research would, I’m sure, reveal better options. If anyone has suggestions for this or something to add, please leave a comment.
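For illustration, here is a minimal Node.js sketch of the MySQL route using the node "mysql" driver. The connection details, table and column names are all placeholders, not anything from a real back end:

// export-json.js -- hypothetical sketch: dump a MySQL table as JSON
var mysql = require('mysql');

var connection = mysql.createConnection({
    host: 'localhost',
    user: 'demo',
    password: 'demo',
    database: 'demo'
});

// rows come back as plain JavaScript objects,
// so JSON.stringify is all the conversion you need
connection.query('SELECT id, title, description FROM events', function (err, rows) {
    if (err) throw err;
    console.log(JSON.stringify(rows));
    connection.end();
});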

1. How to adapt and reduce existing data to the data you need

artsfest.org.uk kindly provides URLs to obtain pure data from the site. The main URL is here, which details several ways to obtain the data in different formats.

As mentioned, for this project I wanted data in JSON format, so I used a URL similar to the example (http://www.artsfest.org.uk/api/performances/?start_date_before=20100110&format=json).
The data returned contained everything I required, but there was far too much of it, with a lot of repetition. I only wanted a fraction of what was supplied, to keep the database lightweight for the mobile web app.

For the app I wanted two data entities:

Markers – the locations where events are hosted
Events – each one would have a marker ID

To reduce the data I used a .NET JSON parser, which I adapted, compiled and executed locally. This gave me two JSON files, one for each entity. The parser simply picked out the fields I thought were most relevant (Events had fields like description, title and date of event; Markers had fields like long, lat, title and description), wrote these to the new JSON, and ignored the rest. I won’t go into more detail on this process as it is not the focus of the demo. If you are interested then please get in touch; the sketch below shows the general idea.
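As a rough illustration of that reduction step (the real parser was .NET; this is a hypothetical Node.js equivalent, and the source field names are guesses):

// reduce.js -- hypothetical sketch of the field-reduction step,
// assuming the API response parses to an array of performance objects
var fs = require('fs');

var source = JSON.parse(fs.readFileSync('performances.json', 'utf8'));

// keep only the fields the mobile app needs, ignore the rest
var events = source.map(function (p) {
    return {
        title: p.title,
        description: p.description,
        date: p.start_date,   // source field names here are assumptions
        markerid: p.venue_id
    };
});

// write one document per line, ready for mongoimport (see step 2)
fs.writeFileSync('events.json', events.map(JSON.stringify).join('\n'));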

2. How to import this data into a MongoDB database

Now I have a reduced dataset, I want to import the data into my local MongoDB database. First of all you must ensure that each row (each document, in JSON terms) begins on a new line in the JSON file, or this process doesn’t appear to work. Once you have the files formatted, you are ready to import them into MongoDB. I’m presuming at this point that you have MongoDB installed locally. To do the import, open a terminal window, navigate to the folder containing the events.json and markers.json files, then execute these two commands:

$ mongoimport --db demo -c markers --file markers.json
$ mongoimport --db demo -c events --file events.json

…where demo is the MongoDB database, the -c argument gives the collection, and --file names the JSON file you are importing from. You should get a message in the terminal similar to:

connected to: 127.0.0.1
imported 137 objects

This confirms that the data was imported. Note that I didn’t have to create the demo database beforehand; one of the great things about MongoDB is the way you can implicitly create databases and collections without any setup. To test that the import went OK and to view the database, open a separate command window and execute “mongo” to open a connection to your local instance of MongoDB. Once you are connected, switch to your newly created “demo” database with the mongo command “use demo”. To list your markers, execute “db.markers.find()”; to list events, “db.events.find()”. This query reference gives you more query ideas.
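A quick sanity-check session looks something like this (the count is just an example, and markerid is whatever field name your reduced JSON actually uses):

$ mongo
> use demo
switched to db demo
> db.markers.count()
137
> db.events.find({ markerid: 7 }).limit(2)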

3. How to access the data through RESTful URLs in a Node.js app

So MongoDB is working OK. The next step is to access the data through a Node.js application. Creating a Node.js app is not as daunting for a complete newbie as it sounds, thanks mainly to the Express framework. I recommend installing it and then following the terminal commands below to instantly scaffold a Node.js app that you can use straight away.

alex@ubuntu:~$ express /tmp/foo && cd /tmp/foo

create : /tmp/foo
create : /tmp/foo/package.json
create : /tmp/foo/app.js
create : /tmp/foo/public
create : /tmp/foo/routes
create : /tmp/foo/routes/index.js
create : /tmp/foo/views
create : /tmp/foo/views/layout.jade
create : /tmp/foo/views/index.jade
create : /tmp/foo/public/javascripts
create : /tmp/foo/public/images
create : /tmp/foo/public/stylesheets
create : /tmp/foo/public/stylesheets/style.css

dont forget to install dependencies:
$ cd /tmp/foo && npm install

alex@ubuntu:/tmp/foo$ npm install
jade@0.26.3 ./node_modules/jade
├── commander@0.6.1
└── mkdirp@0.3.0
express@2.5.5 ./node_modules/express
├── mkdirp@0.0.7
├── mime@1.2.6
├── qs@0.5.0
└── connect@1.9.1
alex@ubuntu:/tmp/foo$ node app.js
Express server listening on port 3000 in development mode

This can then be adapted by editing the app.js file so that you have RESTful routes (URLs) with which to access the MongoDB data. The app.js file can be found in the Node folder of the GitHub repository. Once you start the Node.js app locally (ensuring the MongoDB demo database is running), the routes provided to access the data are:

http://localhost:5000/markers?callback=callback1 to return all the markers
http://localhost:5000/events?callback=callback1 to return all the events
http://localhost:5000/events/7?callback=callback1 to return events for a specific markerid (7)
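The shape of those routes in app.js is roughly as follows. Treat this as a sketch rather than the repo's exact code: the schemas and the markerid field name are assumptions, and the JSONP response is built by hand so it doesn't depend on any particular Express version.

var express = require('express');
var mongoose = require('mongoose');

var app = express.createServer();
mongoose.connect('mongodb://127.0.0.1/demo');

// minimal models -- the real field lists live in the repo
var Marker = mongoose.model('Marker', new mongoose.Schema({ title: String }));
var Event = mongoose.model('Event', new mongoose.Schema({ title: String, markerid: Number }));

// wrap a result in the client-supplied JSONP callback
function jsonp(req, res, docs) {
    res.send(req.query.callback + '(' +
        JSON.stringify({ success: true, data: docs }) + ');');
}

app.get('/markers', function (req, res) {
    Marker.find({}, function (err, docs) { jsonp(req, res, docs); });
});

app.get('/events', function (req, res) {
    Event.find({}, function (err, docs) { jsonp(req, res, docs); });
});

app.get('/events/:markerid', function (req, res) {
    Event.find({ markerid: req.params.markerid }, function (err, docs) { jsonp(req, res, docs); });
});

app.listen(process.env.PORT || 5000);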

4. How to set up a MongoHQ database and export the data to it

Now you have the data working for you locally, you can set it up to be accessed over the web. MongoHQ provides hosting for MongoDB databases and, best of all, there is a free “sandbox” option for the hosting of each database. All you need to do is sign up, create a database (I created a database called “senchaDemo”) and then set up a new user (“test”) with a password. Next, go to the Database Info tab and make a note of the Mongo URI, which looks something like mongodb://<user>:<password>@environment.mongohq.com:12345/senchaDemo; this can then be used in the mongoose.connect call in your Node.js app.js file. The next thing to do is to export the data from your local MongoDB database to this new database. The instructions are on the MongoHQ site, so I won’t repeat them here. The final thing to do is to hook the local Node.js app up to your new remote MongoDB by changing the mongoose.connect connection string from local:

mongoose.connect('mongodb://127.0.0.1/demo');

to a remote connection

mongoose.connect('mongodb://test:test@staff.mongohq.com:10081/senchaDemo');

where test is the MongoHQ database user I set up and test is its imaginative password.

5. How to put the Node.js app onto Heroku

Now the data is online and ready to use, the next step is to put your Node.js app onto Heroku (or any cloud-based service, but I am going with Heroku as it’s very reliable and the basic service is free to use). The first thing to do is to add the Node.js code to a git repository. I always forget this process; this blog post is as good as any for showing you what to do on your local machine, beginning with git init.
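For reference, the bare-bones sequence, run from the folder containing the Node code, is:

$ git init
$ git add .
$ git commit -m "initial commit"

Once you have created the git repository you can push it onto Heroku. First create the remote application with this terminal command (responses from Heroku also included):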

alex@ubuntu:~/NodeSenchaDemo/Node$ heroku apps:create -s cedar
Creating high-frost-8468... done, stack is cedar
http://high-frost-8468.herokuapp.com/ | git@heroku.com:high-frost-8468.git
Git remote heroku added

Next, push to Heroku:

alex@ubuntu:~/NodeSenchaDemo/Node$ git push heroku master

Then you must scale the process:

alex@ubuntu:~/NodeSenchaDemo/Node$ heroku ps:scale web=1

Then you can check that the process is running correctly:

alex@ubuntu:~/NodeSenchaDemo/Node$ heroku ps

which will give you output like:

=== web: `node web.js`
web.1: up for 10s
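The web process type that Heroku starts comes from a Procfile in the root of the repo; for this app (assuming the entry point is called web.js, as the output above suggests) it is the single line:

web: node web.js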

You can also check the log files to see if they report any issues:
alex@ubuntu:~/NodeSenchaDemo/Node$ heroku logs

…hopefully they won’t, and you now have a working Node.js app hosted in the cloud.

6. Example of a Sencha Touch 2 app calling these URLs as the proxy for the data

Now we have the data supplied through the Node.js app, we can use it in an ST2 app. Sencha Touch 2 can consume JSON or JSONP through its data proxies. I have put the demo ST2 application that uses the data at http://www.weloveit.so/nodeSenchaDemo/ and the code can be found on GitHub. The Markers.js store file shows how to set up the proxy:

Ext.define('demo.store.Markers', {
    extend: "Ext.data.Store",
    config: {
        storeId: 'markerStore',
        model: "demo.model.Marker",
        autoLoad: true,
        proxy: {
            type: 'jsonp',
            callbackKey: 'callback',
            url: 'http://high-frost-8468.herokuapp.com/markers',
            reader: {
                type: 'json',
                rootProperty: 'data',
                successProperty: 'success'
            }
        }
    }
});

Note that the type is ‘jsonp’. JSONP enables you to get data from a different domain/server, so we can fetch the data from the Heroku app while hosting the ST2 app on a different domain.
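For reference, the matching model looks something like this; the field list is illustrative, so check the model file in the SenchaCode repo for the real one:

Ext.define('demo.model.Marker', {
    extend: 'Ext.data.Model',
    config: {
        // field names here are assumptions -- the repo defines the actual set
        fields: ['id', 'title', 'description', 'lat', 'long']
    }
});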

The demo ST2 app is a simple tabPanel application that shows how to call each of the three URLs the Node.js app supplies. If you are completely new to Sencha Touch, this video will give you the basics on tabPanels.

I hope you find this info useful. If you are scratching your head over anything in this post or need more info then please leave a comment. Any feedback would be appreciated.

SQL Server 2000 script for file space usage

The script below, taken from http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=89058, shows physical file information for the databases hosted on a SQL Server instance.

use master
go
if exists ( select * from tempdb.dbo.sysobjects o
where o.xtype in ('U') and o.id = object_id( N'tempdb..#DB_FILE_INFO' ))
drop table #DB_FILE_INFO
go

if exists ( select * from tempdb.dbo.sysobjects o
where o.xtype in ('U') and o.id = object_id( N'tempdb..#DB_INFO' ))
drop table #DB_INFO
go
set nocount on
go
create table #DB_FILE_INFO (
[ID] int not null
identity (1, 1) primary key clustered ,
[DATABASE_NAME] sysname not null ,
[FILEGROUP_TYPE] nvarchar(4) not null ,
[FILEGROUP_ID] smallint not null ,
[FILEGROUP] sysname not null ,
[FILEID] smallint not null ,
[FILENAME] sysname not null ,
[DISK] nvarchar(1) not null ,
[FILEPATH] nvarchar(260) not null ,
[MAX_FILE_SIZE] int null ,
[FILE_SIZE] int not null ,
[FILE_SIZE_USED] int not null ,
[FILE_SIZE_UNUSED] int not null ,
[DATA_SIZE] int not null ,
[DATA_SIZE_USED] int not null ,
[DATA_SIZE_UNUSED] int not null ,
[LOG_SIZE] int not null ,
[LOG_SIZE_USED] int not null ,
[LOG_SIZE_UNUSED] int not null
)
go

declare @sql nvarchar(4000)
set @sql =
'use ['+'?'+'] ;
if db_name() <> N''?'' goto Error_Exit

insert into #DB_FILE_INFO
(
[DATABASE_NAME],
[FILEGROUP_TYPE],
[FILEGROUP_ID],
[FILEGROUP],
[FILEID],
[FILENAME],
[DISK],
[FILEPATH],
[MAX_FILE_SIZE],
[FILE_SIZE],
[FILE_SIZE_USED],
[FILE_SIZE_UNUSED],
[DATA_SIZE],
[DATA_SIZE_USED],
[DATA_SIZE_UNUSED],
[LOG_SIZE],
[LOG_SIZE_USED],
[LOG_SIZE_UNUSED]
)
select top 100 percent
[DATABASE_NAME] = db_name(),
[FILEGROUP_TYPE] = case when a.groupid = 0 then ''Log'' else ''Data'' end,
[FILEGROUP_ID] = a.groupid,
a.[FILEGROUP],
[FILEID] = a.fileid,
[FILENAME] = a.name,
[DISK] = upper(substring(a.filename,1,1)),
[FILEPATH] = a.filename,
[MAX_FILE_SIZE] =
convert(int,round(
(case a.maxsize when -1 then null else a.maxsize end*1.000)/128.000
,0)),
[FILE_SIZE] = a.[fl_size],
[FILE_SIZE_USED] = a.[fl_used],
[FILE_SIZE_UNUSED] = a.[fl_unused],
[DATA_SIZE] = case when a.groupid <> 0 then a.[fl_size] else 0 end,
[DATA_SIZE_USED] = case when a.groupid <> 0 then a.[fl_used] else 0 end,
[DATA_SIZE_UNUSED] = case when a.groupid <> 0 then a.[fl_unused] else 0 end,
[LOG_SIZE] = case when a.groupid = 0 then a.[fl_size] else 0 end,
[LOG_SIZE_USED] = case when a.groupid = 0 then a.[fl_used] else 0 end,
[LOG_SIZE_UNUSED] = case when a.groupid = 0 then a.[fl_unused] else 0 end
from
(
Select
aa.*,
[FILEGROUP] = isnull(bb.groupname,''''),
-- All sizes are calculated in MB
[fl_size] =
convert(int,round((aa.size*1.000)/128.000,0)),
[fl_used] =
convert(int,round(fileproperty(aa.name,''SpaceUsed'')/128.000,0)),
[fl_unused] =
convert(int,round((aa.size-fileproperty(aa.name,''SpaceUsed''))/128.000,0))
from
dbo.sysfiles aa
left join
dbo.sysfilegroups bb
on ( aa.groupid = bb.groupid )
) a
order by
case when a.groupid = 0 then 0 else 1 end,
a.[FILEGROUP],
a.name

Error_Exit:

'

--print @sql

exec sp_msforeachdb @sql

--select * from #DB_FILE_INFO

declare @DATABASE_NAME_LEN varchar(20)
declare @FILEGROUP_LEN varchar(20)
declare @FILENAME_LEN varchar(20)
declare @FILEPATH_LEN varchar(20)

select
@DATABASE_NAME_LEN = convert(varchar(20),max(len(rtrim(DATABASE_NAME)))),
@FILEGROUP_LEN = convert(varchar(20),max(len(rtrim(FILEGROUP)))),
@FILENAME_LEN = convert(varchar(20),max(len(rtrim(FILENAME)))),
@FILEPATH_LEN = convert(varchar(20),max(len(rtrim(FILEPATH))))
from
#DB_FILE_INFO

if object_id('tempdb..##DB_Size_Info_D115CA380E2B4538B6CBBB51') is not null
begin
drop table ##DB_Size_Info_D115CA380E2B4538B6CBBB51
end

-- Setup code to reduce column sizes to max used
set @sql =
'
select
[DATABASE_NAME] = convert(varchar('+@DATABASE_NAME_LEN+'), a.[DATABASE_NAME] ),
a.[FILEGROUP_TYPE],
[FILEGROUP_ID],
[FILEGROUP] = convert(varchar('+@FILEGROUP_LEN+'), a.[FILEGROUP]),
[FILEID],
[FILENAME] = convert(varchar('+@FILENAME_LEN+'), a.[FILENAME] ),
a.[DISK],
[FILEPATH] = convert(varchar('+@FILEPATH_LEN+'), a.[FILEPATH] ),
a.[MAX_FILE_SIZE],
a.[FILE_SIZE],
a.[FILE_SIZE_USED],
a.[FILE_SIZE_UNUSED],
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when a.[FILE_SIZE] is null or a.[FILE_SIZE] = 0
then NULL
else (100.00000*a.[FILE_SIZE_USED])/(1.00000*a.[FILE_SIZE])
end ,1)) ,
a.[DATA_SIZE],
a.[DATA_SIZE_USED],
a.[DATA_SIZE_UNUSED],
a.[LOG_SIZE],
a.[LOG_SIZE_USED],
a.[LOG_SIZE_UNUSED]
into
##DB_Size_Info_D115CA380E2B4538B6CBBB51
from
#DB_FILE_INFO a
order by
a.[DATABASE_NAME],
case a.[FILEGROUP_ID] when 0 then 0 else 1 end,
a.[FILENAME]
'

--print @sql

exec ( @sql )

select top 100 percent
*
into
#DB_INFO
from
##DB_Size_Info_D115CA380E2B4538B6CBBB51 a
order by
a.[DATABASE_NAME],
case a.[FILEGROUP_ID] when 0 then 0 else 1 end,
a.[FILENAME]

drop table ##DB_Size_Info_D115CA380E2B4538B6CBBB51

set nocount off

print 'Show Details'
select * from #DB_INFO

print 'Total by Database and File'
select
[DATABASE_NAME] = isnull([DATABASE_NAME],' All Databases'),
[FILENAME] = isnull([FILENAME],''),
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO a
group by
[DATABASE_NAME],
[FILENAME]
with rollup
order by
case when [DATABASE_NAME] is null then 1 else 0 end ,
[DATABASE_NAME],
case when [FILENAME] is null then 1 else 0 end ,
[FILENAME]

print 'Total by Database and Filegroup'

select
--[Server] = convert(varchar(15),@@servername),
[DATABASE_NAME] = isnull([DATABASE_NAME],'** Total **'),
[FILEGROUP] =
case when [FILEGROUP] is null then '' when [FILEGROUP] = '' then 'LOG' else [FILEGROUP] end,
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
--MAX_SIZE = SUM([MAX_FILE_SIZE]),
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO A
group by
[DATABASE_NAME],
[FILEGROUP]
with rollup
order by
case when [DATABASE_NAME] is null then 1 else 0 end ,
[DATABASE_NAME],
case when [FILEGROUP] is null then 10 when [FILEGROUP] = '' then 0 else 1 end ,
[FILEGROUP]

print 'Total by Database and Filegroup Type'

select
--[Server] = convert(varchar(15),@@servername),
[DATABASE_NAME] = isnull([DATABASE_NAME],'** Total **'),
[FILEGROUP_TYPE] = isnull([FILEGROUP_TYPE],''),
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO A
group by
[DATABASE_NAME],
[FILEGROUP_TYPE]
with rollup
order by
case when [DATABASE_NAME] is null then 1 else 0 end ,
[DATABASE_NAME],
case when [FILEGROUP_TYPE] is null then 10 when [FILEGROUP_TYPE] = 'Log' then 0 else 1 end

print 'Total by Disk, Database, and Filepath'
select
[DISK] = isnull([DISK],''),
[DATABASE_NAME] = isnull([DATABASE_NAME],''),
[FILEPATH] = isnull([FILEPATH],''),
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO a
group by
[DISK],
[DATABASE_NAME],
[FILEPATH]
with rollup
order by
case when [DISK] is null then 1 else 0 end ,
[DISK],
case when [DATABASE_NAME] is null then 1 else 0 end ,
[DATABASE_NAME],
case when [FILEPATH] is null then 1 else 0 end ,
[FILEPATH]

print 'Total by Disk and Database'
select
[DISK] = isnull([DISK],''),
[DATABASE_NAME] = isnull([DATABASE_NAME],''),
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO a
group by
[DISK],
[DATABASE_NAME]
with rollup
order by
case when [DISK] is null then 1 else 0 end ,
[DISK],
case when [DATABASE_NAME] is null then 1 else 0 end ,
[DATABASE_NAME]

print 'Total by Disk'
select
[DISK] = isnull([DISK],''),
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO a
group by
[DISK]
with rollup
order by
case when [DISK] is null then 1 else 0 end ,
[DISK]

print 'Total by Database'
select
--[Server] = convert(varchar(20),@@servername),
[DATABASE_NAME] = isnull([DATABASE_NAME],'** Total **'),
FILE_SIZE = sum(FILE_SIZE),
FILE_SIZE_USED = sum(FILE_SIZE_USED),
FILE_SIZE_UNUSED = sum(FILE_SIZE_UNUSED),
FILE_USED_PCT =
convert(numeric(5,1),round(
case
when sum(a.[FILE_SIZE]) is null or sum(a.[FILE_SIZE]) = 0
then NULL
else (100.00000*sum(a.[FILE_SIZE_USED]))/(1.00000*sum(a.[FILE_SIZE]))
end ,1)) ,
DATA_SIZE = sum(DATA_SIZE),
DATA_SIZE_USED = sum(DATA_SIZE_USED),
DATA_SIZE_UNUSED = sum(DATA_SIZE_UNUSED),
LOG_SIZE = sum(LOG_SIZE),
LOG_SIZE_USED = sum(LOG_SIZE_USED),
LOG_SIZE_UNUSED = sum(LOG_SIZE_UNUSED)
from
#DB_INFO A
group by
[DATABASE_NAME]
with rollup
order by
case when [DATABASE_NAME] is null then 1 else 0 end ,
[DATABASE_NAME]