I’ve dug into ElasticSearch lately, primarily for indexing event and logging data via LogStash (a post for another day). It’s not another NoSQL system laying claim to the “web scale” DBMS realm; rather, it carves out its niche by making arbitrary data searchable, very quickly. What really strikes me is the pure simplicity exposed to the end user/developer: no need for massive amounts of boilerplate, libraries or packages – just the ability to make HTTP/HTTPS requests.

Being an engineer, my concerns lie more on the back end: distribution, clustering, sharding, storage and maintenance. There are some good options out there; my favorite is curator. The catch with most of them? They assume you’re running Linux, Python or a combination thereof (not that there’s anything wrong with that – my job just happens to primarily use Windows). While I have no issue installing Python and setting up easy_install and pip (why aren’t these part of the base install!?), I’m not the only person on the team who might need to run maintenance tasks. I’m also not a huge fan of wasting the time and crayons explaining how to get a utility someone else wrote working.

I’m a huge believer in keeping the solution simple and working with what you already have. Looking around, there’s not much love for Windows in the ElasticSearch world – even less so for PowerShell – so I decided to build a module myself.

 

Enter elasticsearch-maintenance

The code is up on GitHub with a basic usage example script. Once imported, a single command becomes available that returns PSObjects exposing a number of methods for interfacing with the ElasticSearch API.

Syntax: Get-EsIndexes -Server string[] -Port int[] -IndexPrefix string[]
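For example, a call might look something like this (server name, port and prefix are placeholders, and the cmdlet name assumes the syntax above):

$Indexes = Get-EsIndexes -Server "es01.example.local" -Port 9200 -IndexPrefix "logstash"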

In the above example, the returned objects are stored in the $Indexes variable. Displayed in table format, the contents look like:

Passing the object to Get-Member, your output will look similar to:

The interesting bits are the NoteProperty and ScriptMethod items, with which you can compare or measure properties of each index and perform actions against it, respectively.

 

Keeping it Simple

The provided Example.ps1 does a good job of showing basic functionality within a script when you’re looking to delete indexes over a certain age. The Age property is itself a System.TimeSpan object, saving you precious time trying to parse date/time values and compare them.
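In practice that makes the age check a one-liner – a minimal sketch, assuming the returned objects expose a Delete() ScriptMethod (check the module for the actual method name):

$Indexes | Where-Object { $_.Age.Days -gt 30 } | ForEach-Object { $_.Delete() }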

 

Where it Needs Work

I have one nagging issue with the module – namely Age calculation. I’m making assumptions about index naming conventions that may or may not work for everyone. As stated earlier, since my use is with LogStash data, my indexes are programmatically and uniformly named (i.e. ${IndexPrefix}-${Year}-${Month}-${Day}.${Hour}). You’ll notice that the values after the prefix are in descending order of significance; while this makes a lot of sense for readability and sorting, there exists the possibility that someone doesn’t follow this convention at all. You’ll also notice the function uses a RegEx pattern (i.e. ^\w+[-\.](\d+)[-\.](\d+)[-\.](\d+)([-\.](\d+))* ). Optimally, if there is no match whatsoever, $span is not returned and Age is left blank; worst case, the values technically match but are not accurately derived – potentially disastrous if not properly validated. You’ve been warned. Hopefully I’ll get it worked out soon, but in the meantime if anyone smarter than I comes across this – by all means feel free to create a fork.
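For reference, the general idea behind the calculation is roughly this (a sketch, not the module’s exact code; the sample index name is made up):

$IndexName = "logstash-2013-04-01.05"
if ($IndexName -match '^\w+[-\.](\d+)[-\.](\d+)[-\.](\d+)([-\.](\d+))*') {
    $Hour    = if ($Matches[5]) { [int]$Matches[5] } else { 0 }
    $Created = Get-Date -Year $Matches[1] -Month $Matches[2] -Day $Matches[3] -Hour $Hour -Minute 0 -Second 0
    $Span    = (Get-Date) - $Created    # Age ends up as a System.TimeSpan
}
# No match means $Span is never assigned and Age stays blank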

 


DIY Dynamic DNS

01Apr13

I’ve been a long-time user of Dynamic DNS services, primarily NO-IP. It worked well for me – until recently. I could tolerate having some agent app running on a machine in my network all the time, since DD-WRT has a built-in agent that runs on my low-power router.

What I couldn’t forgive was the ‘nagware’ approach they adopted to push me into the paid tier of their product. Granted, if I like the product I shouldn’t mind paying – a reasonable argument – but I’m cheap. Granted, it’s a unique kind of cheap: I’m already paying for domain and hosting services as well as having hardware at home…

For the purposes of this project, you’ll just need domain services. The general idea is to set up a DNS A record that directs traffic to the public IP address your ISP dynamically assigns you. If you’ve got DSL or a business line, you may have a statically assigned IP address that never changes – making this of little value to you.

Requirements:

  • Control of a Domain (Add/Remove DNS Records)
  • A domain provider that provides an API
  • A Windows machine with PowerShell connected to the internet

My hosting is through DreamHost; one of the many perks is a nice, clean API covering almost all the hosting features (documented here). The ones we’re really interested in are the DNS commands. Our options are list, add and remove. Update is absent – and no, dns-add_record will not quietly overwrite an existing record – leaving it up to us to write our own update logic.

Process:

  • Find our WAN IP
  • List the current DNS entries
  • Search for our Dynamic DNS entry
  • Check the IP address of the entry
    • if it matches the current WAN IP, do nothing
    • if it differs, remove the entry and add an updated one
  • Exit

Simple enough, right? PowerShell provides a decent web client, which I’ll wrap in a simple function that can be called by others. DreamHost’s API also requires an API key (which we just need to generate), a GUID/UUID per request and a Client ID (I’m just using the name of the computer running the script). The basic outline is below.
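Something along these lines – a minimal sketch rather than the finished script; the endpoint and the key/cmd/unique_id parameters are DreamHost’s, everything else (function name, placeholder key) is illustrative:

Function Invoke-DreamHostApi ($ApiKey, $Command, $ExtraArgs = "") {
    $Client = New-Object System.Net.WebClient
    $Uuid   = [guid]::NewGuid().ToString()   # DreamHost expects a unique id per request
    $Url    = "https://api.dreamhost.com/?key=$ApiKey&unique_id=$Uuid&cmd=$Command$ExtraArgs"
    $Client.DownloadString($Url)
}

# e.g. list the current DNS records:
# Invoke-DreamHostApi "MY-API-KEY" "dns-list_records"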

Naturally, if you have a different hosting provider you’ll need to alter the functions to match your API, but the underlying principles should remain the same. After a quick run-through and a few tests, this is what I came up with; it’s short and sweet – lacking some error handling, but for readability’s sake it’s sufficient.

 

Source Code


Back with another quick entry. In my last post I created a simple script that allows the fast and simple migration of a database from one server to another, taking care of all the annoying considerations (like default file paths) that slow down the overall process – and in particular avoiding the painfully slow GUI. After testing and proving it could work, I was asked if there was a way to transform this script to help create database mirrors. After digging into the process, I didn’t see why not.

I’ve stuck with a very object-oriented approach on this script again, so if you skip down to the runtime section you’ll get a very good idea of the programmatic flow. The script is largely the same, aside from numerous helper functions to facilitate mirroring. It automatically retrieves the MSSQL service accounts on both sides, creates the endpoint, grants those accounts connect rights to the endpoint, flips the database to the FULL recovery model and performs a full and transactional backup and restore.

I don’t know if the do/until loop was the right way to go with this one, but it seemed to make sense and does indeed work; the script will continue to perform a transactional backup and restore until the mirror creation succeeds. The only problem I can see with this method is if your organization is using an automated backup routine (like Data Protection Manager) – which would break the log chain, leaving this script in an endless loop until stopped. I’ll leave that for someone else to solve if they encounter the issue – hopefully they’re the sharing type.
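The shape of that loop is roughly the following (a sketch with hypothetical helper-function names, not the script verbatim):

do {
    Backup-TransactionLog  -Instance $SourceInstance      -Database $Database
    Restore-TransactionLog -Instance $DestinationInstance -Database $Database
    $MirrorCreated = Create-Mirror -Source $SourceInstance -Destination $DestinationInstance -Database $Database
} until ($MirrorCreated)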

Mirror-DB.ps1 <SourceInstance> <DestinationInstance> <Database>

 

 

Source Code


We’ve done a lot of database migrations in the last month or so, and I really found it a terribly tedious process. I get a list of databases (databii?) to move, and then I have to sit there performing the backup/restore process for each one, making sure each finishes before moving on to the next… no – just no.

Yesterday I spent some time creating a script that handles this pretty deftly – I give it source and destination instances and the database to move, and it handles everything. Not to mention it can easily be wrapped in a FOR loop that works through an entire list of databases to move, and I can do more important things than watch a progress bar. In an extreme case, you could grab an array of all the databases on the instance and migrate every one of them.
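Something like the following (the script and list file names are placeholders for whatever you call yours):

foreach ($Database in (Get-Content .\databases.txt)) {
    .\Migrate-DB.ps1 "SQLSOURCE\INSTANCE" "SQLDEST\INSTANCE" $Database
}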

The one thing I hated about the backup/restore process is that the database file paths aren’t necessarily consistent across instances – in any case it’s variable – so I solved for that as well with a simple little switch statement, building the restore path for data and log files to their respective locations regardless of what instance configuration you run across. If you have some non-standard file extensions, they are easily added to the switch statement in the Restore-DB function.
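The idea, roughly (the extensions and path variables are illustrative – the real function pulls the defaults from the destination instance):

switch -Wildcard ($FileName) {
    "*.mdf" { $RestorePath = Join-Path $DefaultDataPath (Split-Path $FileName -Leaf) }
    "*.ndf" { $RestorePath = Join-Path $DefaultDataPath (Split-Path $FileName -Leaf) }
    "*.ldf" { $RestorePath = Join-Path $DefaultLogPath  (Split-Path $FileName -Leaf) }
}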

The script itself is fairly basic – I didn’t really bother with any output or error handling yet – but as long as the referenced instances and database are valid and you have rights, it shouldn’t matter. You’ll notice that I’m re-using the Exec-Query function from an old article with a slight variation: the query timeout. I needed longer than the default for a backup/restore operation.

 

Source Code


If you’ve ever needed a quick way to update several servers without the aid of a local WSUS server, and find the idea of logging into them all and clicking through updates as unappealing as I do, this might be a good article for you.

I had some scripts written in batch making calls to WMI and doing a bunch of other hacky stuff – and I didn’t really want to show anyone that shameful mess, so I set out to rewrite it in PowerShell. Looking around online, there were a lot of truly monolithic examples, none of which really did what I wanted. The script should simply execute, find updates, install them and launch itself again after the user reboots if there are more updates. None of which should require input or prompting – isn’t that what scripts are for?
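The core of that search/install cycle, using the Windows Update Agent COM API, looks roughly like this (a minimal sketch – the real script also handles rebooting and re-launching itself):

$Session  = New-Object -ComObject Microsoft.Update.Session
$Searcher = $Session.CreateUpdateSearcher()
$Result   = $Searcher.Search("IsInstalled=0 and Type='Software'")

if ($Result.Updates.Count -gt 0) {
    $Downloader         = $Session.CreateUpdateDownloader()
    $Downloader.Updates = $Result.Updates
    [void]$Downloader.Download()

    $Installer         = $Session.CreateUpdateInstaller()
    $Installer.Updates = $Result.Updates
    $InstallResult     = $Installer.Install()
    # $InstallResult.RebootRequired tells us whether another pass is needed after a reboot
}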

Below is the result; small, simple and autonomous.

 

Source Code


If you’ve used any popular online services in the last few years, you’ve likely received an email or two informing you of a security breach and, as a result, that your password needs to be changed. In many of these cases, your username and password were stored in plain text – a fantastically avoidable, and frankly lazy, mistake.

Choosing this lack of security usually boils down to one of two mentalities:

  1. Encryption is hard
  2. Thinking your service is impenetrable

Neither of which is true. I set out over a weekend some time ago just to see how hard it was. I’ll say right out that I’m pretty green with PHP, so when I tell you how long it took to write this from scratch, I want you to really understand how easy encrypting passwords (and optionally usernames) can be. All said and done, I spent 30 minutes writing this login system. For a novice that’s pretty good – and I have to think someone who codes professionally could do it much quicker, and likely end up with better code.

The method I chose was SHA-256 with a random salt. There are other options out there, but this was the easiest to wrap my head around and it implements quite cleanly. The only issue I have with this method is that the salt is stored alongside the password hash – so if your database is compromised, you’ve only delayed the inevitable at best. But for the sake of brevity, that’s what I’m doing.
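In PHP that boils down to a couple of lines (illustrative only – the variable names are mine, and openssl_random_pseudo_bytes is just one way to get a salt):

$salt = bin2hex(openssl_random_pseudo_bytes(16));   // random per-user salt
$hash = hash('sha256', $salt . $password);          // hash and salt are both stored with the user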

Building the User Database

First we need to create our users table. We’re keeping it simple – just enough to store the bare essential user information (a possible table definition follows the list):

  1. ID
  2. Active Status
  3. Username
  4. Password (Hashed)
  5. Salt
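A MySQL definition along these lines will do (column types and lengths are my assumptions, not the original schema):

CREATE TABLE users (
  id       INT         NOT NULL AUTO_INCREMENT,
  active   TINYINT(1)  NOT NULL DEFAULT 1,
  username VARCHAR(64) NOT NULL,
  password CHAR(64)    NOT NULL, -- SHA-256 hex digest
  salt     CHAR(32)    NOT NULL, -- random per-user salt
  PRIMARY KEY (id)
);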

 

To save you some trouble with a chicken-and-egg scenario, I’ve pre-populated the user admin with the password admin.

Connecting PHP to the Database

Next we need to create the database connection for the table we just created. Create a new empty file named db.conf.inc and paste the following into it. Make sure to fill in your real MySQL server IP/hostname, username, password and database name.

db.conf.inc
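Roughly along these lines – a sketch using mysqli, with placeholder connection details:

<?php
$db = mysqli_connect('127.0.0.1', 'db_user', 'db_password', 'db_name');
if (!$db) {
    die('Could not connect to the database: ' . mysqli_connect_error());
}
?>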

 

Logging in

Next we need a page that actually makes use of this connection information and a page to present our login form. Create validate_user.php and login.php. The validation page does all the work – the login page is just the form that posts credentials to it.

validate_user.php
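A rough sketch of what the validation logic looks like (not the original file – the field names, session key and the $db handle from db.conf.inc are assumptions):

<?php
session_start();
include 'db.conf.inc';

$username = mysqli_real_escape_string($db, $_POST['myusername']);
$result   = mysqli_query($db, "SELECT password, salt FROM users WHERE username='$username' AND active=1");

if ($row = mysqli_fetch_assoc($result)) {
    // Re-hash the submitted password with the stored salt and compare against the stored hash
    if (hash('sha256', $row['salt'] . $_POST['mypassword']) === $row['password']) {
        $_SESSION['myusername'] = $username;
        header('Location: index.php');
        exit;
    }
}
header('Location: login.php');
?>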

 

login.php

Member Login
Username : <input name="myusername" type="text" id="myusername">
Password :

 

I’m also going to create a quick and dirty homepage that will force logon for the session and display a bit of content once you’ve done so.

index.php
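A minimal version of that check (the session key assumes the validation sketch above):

<?php
session_start();
if (!isset($_SESSION['myusername'])) {
    header('Location: login.php');
    exit;
}
?>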

You should see content here.

 

Alright, so if we were to stop here – you’d have a working index and login page. You can log on with the username admin and password admin. Not bad for a handful of pages – but what if you want to add a new user?

Registration

We’ll need to create two more pages: register.php and new_user.php. Just like the login pages, I’ve segmented these into a worker and a presenter to keep the code easy to read. new_user.php does all the work: generating our random salt, hashing the password, making sure the passwords entered match, checking the database to make sure we’re not creating a duplicate user – then ultimately inserting the new user into the database.

new_user.php
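A sketch of that flow (again, not the original file – field names and messages are mine):

<?php
include 'db.conf.inc';

// Make sure both password fields match
if ($_POST['mypassword'] !== $_POST['myrepeat']) {
    die('Passwords do not match.');
}

// Make sure we are not creating a duplicate user
$username = mysqli_real_escape_string($db, $_POST['myusername']);
$check    = mysqli_query($db, "SELECT id FROM users WHERE username='$username'");
if (mysqli_num_rows($check) > 0) {
    die('That username already exists.');
}

// Generate a random salt, hash the password and insert the new user
$salt = bin2hex(openssl_random_pseudo_bytes(16));
$hash = hash('sha256', $salt . $_POST['mypassword']);
mysqli_query($db, "INSERT INTO users (active, username, password, salt) VALUES (1, '$username', '$hash', '$salt')");

header('Location: login.php');
?>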

 

register.php

New User
Username : <input name="myusername" type="text" id="myusername">
Password :
Repeat :

 

Logging out

Presumably at some point you’ll want to expire the sessions you’ve created. I’m placing the needed code into yet another file named logout.php. Once the session is destroyed, I redirect to index.php – which will in turn send you back to login.php.

logout.php
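Which is about as small as a page gets:

<?php
session_start();
session_destroy();
header('Location: index.php');
?>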

 

Wrapping up

That’s pretty much it. Yes, it needs user management and permission groups – and oh man, is it ugly! But it’s a complete, built-from-scratch encrypted login system in under 30 minutes and fewer than 10 pages – which could easily be reduced to fewer than 5. It’s also worth noting that unless the authentication is behind SSL (forced), you might as well be shouting your credentials out loud; SSL Strip, anyone?

What should you be taking away from this?

  1. Encryption isn’t hard
  2. Some admins are just lazy
  3. A decent stub for expanding your own login system
  4. Knowing how important SSL is for online authentication

Source Code


A while back I did an article on WMIC and how you can use it to spawn remote processes on a specified host. It’s a great tool if you need to run something simple on a lot of servers. The one downside: absolutely no interactivity with the remote process. Linux has this taken care of right out of the box with ssh (although you do need to set up your shared keys).

That’s where Windows Remote Shell (WinRS) comes in. It doesn’t get a lot of press, living in the shadow of PowerShell, but it uses the same underlying technology: WinRM. Claiming it’s exactly like SSH, or that it’s fully interactive, would be a little misleading – it provides realtime piping of the input/output of the spawned process, but it’s not a persistent interactive shell.

Setting up WinRM

Like I said earlier, Remote Shell relies on Remote Management for transport. Unless your systems already have WinRM enabled via Group Policy, it’s likely turned off. Enabling it is as simple as running the following command on your target servers.
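In its simplest form that’s just:

winrm quickconfig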

 

Who wants to log on to every machine just to be able to run remote commands? Seems redundant, right? Well, if someone taught you to pass commands to servers using decades-old technology that’s always on, you could probably take care of that in short order. Like this…
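For example, using WMIC (from that earlier article) to switch WinRM on remotely – the host name is a placeholder, and -quiet suppresses quickconfig’s confirmation prompt:

wmic /node:"SERVER01" process call create "cmd.exe /c winrm quickconfig -quiet"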

 

Or simply enable the GPO. Once you’ve done that, you’re free to use WinRS whenever you need it.
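For instance, to run a quick command against a remote box (again, the host name is just a placeholder):

winrs -r:SERVER01 ipconfig /all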

 

There are more options and possibilities – exposing the endpoint as a URI and port rather than a simple NetBIOS name, for example – but I’ll leave those for you to dig into further; I’m just covering the most basic incarnation.


If you have any customers that require daily backups, you’ve likely encountered a few quirks that caught me unawares: lack of backwards compatibility and breakage of the log chain for your disaster recovery backups (to name a few). No problem if you’re in control of the restoration endpoint – a luxury I don’t have.

Some time ago we began putting file storage into the database. An architecturally wise decision, but little regard was given to the impact this would have on clients that receive regular copies of their backups. Traditional backup methods don’t scale when database size grows tenfold (or more). I can already hear people screaming about log shipping or similar methods. To them I can only say this: how much do you trust someone else to set up those methods correctly while you’re still held accountable for data/schema consistency? Hence my conundrum.

Time, bandwidth and concurrency just became expensive commodities. I needed to solve for four important requirements that traditional backup methods couldn’t give me:

  1. Backwards Compatibility
  2. Exclusion of Certain Tables
  3. Preservation of Disaster Recovery Log Chain
  4. Speed

The term compatibility is a bit of a misnomer here – functionality that only exists in MSSQL 2012 isn’t going to magically work in SQL 2005 – but you’ll get a functional replica perfect for reporting purposes. Having said that I might as well get this out of the way now…

THIS IS NOT A REPLACEMENT FOR DISASTER RECOVERY

If you’re brave/stupid enough to rely on it as such – I hope everything works out, I truly do. More to the point, may HR have mercy on your role.

Requirements

To keep things simple, I’m using SQL Server Management Objects (SMO) in a function that calls the Scripter class; this means you need SQL Server Management Studio (or at least the client tools) installed wherever you intend to run this script (I know, dependencies suck). I’m also using BCP, which is part of the SSMS install. I had toyed with the idea of simply dumping the table contents to CSV, but some data doesn’t translate to a flat file.

The Parts

First we’ll dump the schema to files: tables, views, stored procedures, functions and triggers. The previously mentioned SMO does most of the heavy lifting; each component is dumped to its own file in the specified output directory.
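Stripped to its essence, the SMO portion looks something like this (a sketch, not the full Export-Schema function – server, database and output path are placeholders, and the real version iterates views, procedures, functions and triggers too):

[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$Server   = New-Object Microsoft.SqlServer.Management.Smo.Server "DBSERVER\INSTANCE"
$Database = $Server.Databases["MyDatabase"]
$Scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $Server

foreach ($Table in ($Database.Tables | Where-Object { -not $_.IsSystemObject })) {
    $Scripter.Script($Table) | Out-File (Join-Path "C:\Output\Tables" "$($Table.Name).sql")
}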

 

Export-Schema {[DBServer\]Instance} {Database} {Output Folder}

Next we need to enumerate the user tables in the target database excluding any we don’t want backed up. I’m using the SQL function I mentioned in this article. I’m sure there’s a better way – I don’t care – I have this lying around.

 

Exec-Query {[DBServer\]Instance} {Database} {Query} [{UserName} {Password}]

Putting it all Together

The script below calls the functions above, dumping all the schema and data to files in the specified output directory. You’ll notice I’m excluding the aforementioned File_Storage table and a few system tables.
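The data half of that runtime boils down to an enumerate-and-BCP loop like the one below (a sketch – server, database, exclusion list and output path are placeholders, and Exec-Query is the helper mentioned above):

$Exclude = "'File_Storage','sysdiagrams'"
$Tables  = Exec-Query "DBSERVER\INSTANCE" "MyDatabase" "SELECT name FROM sys.tables WHERE name NOT IN ($Exclude)"
foreach ($Table in $Tables) {
    bcp "MyDatabase.dbo.$($Table.name)" out "C:\Output\Data\$($Table.name).dat" -n -T -S "DBSERVER\INSTANCE"
}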

 

There are several things you can do from here; like compression and encryption. Reconstituting the data is pretty straightforward, but is something I’ll cover in a follow-up article; this one is already a little long-winded – and I need sleep.

 


Sometimes you need a database – more specifically its data. PowerShell is great application/data glue – moving information between different systems seamlessly.

Some people give scripts a bad rap because it’s “not real programming.” They’re right – it’s not programming, and it’s not meant to be. When it comes to rapid-prototyping a process with large system scale, I can probably have it done by the time they’ve loaded their IDE.

The function below is designed to be simple and to support the object data structure native to PowerShell. I also tend to avoid modules, plugins and any third-party dependencies – all of which cripple portability. This function should drop right into any script and just work.
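For reference, a function in this spirit looks roughly like the following (a sketch, not necessarily the original line for line – it returns the rows of the first result set as objects):

Function Exec-Query ($Server, $Database, $Query, $Username, $Password) {
    if ($Username -and $Password) {
        $ConnString = "Server=$Server;Database=$Database;User Id=$Username;Password=$Password;"
    } else {
        $ConnString = "Server=$Server;Database=$Database;Integrated Security=SSPI;"
    }
    $Connection = New-Object System.Data.SqlClient.SqlConnection $ConnString
    $Command    = New-Object System.Data.SqlClient.SqlCommand ($Query, $Connection)
    $Command.CommandTimeout = 600                     # longer timeout for slow operations
    $Adapter    = New-Object System.Data.SqlClient.SqlDataAdapter $Command
    $DataSet    = New-Object System.Data.DataSet
    [void]$Adapter.Fill($DataSet)
    $Connection.Close()
    $DataSet.Tables[0]
}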

 

Usage

Exec-Query “{Databaseserver[\Instance]}” “{DatabaseName}” “{AnySQLStatement}” [[DOMAIN\]UserName] [Password]

The credentials are optional; integrated authentication is used if none are supplied. You can save the output to a PowerShell object and evaluate it later in your script(s). If you need to run more complex statements (like from a script file), simply:
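One way of doing that (the file name is a placeholder):

Exec-Query "DBSERVER\INSTANCE" "MyDatabase" (Get-Content .\MyScript.sql | Out-String)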

 

Easy! Nothing complex here, no magic – unfortunately there is a lot of bad advice out there for PowerShell – or at least advice written for PowerShell v1 that’s still lingering about.


Most developers think the command line is boring – and if all you ever learned to do was list directories and run a few simple copy commands, I can see why you’d think that way. But the truth is that a proper grasp of the core system allows a fantastic degree of control. If you’ve ever needed to execute simple tasks on every server/desktop remotely, you’ve probably turned to third-party utilities or PowerShell – most of which assume the target versions are all the same, or require an agent to be installed on the target. Truthfully, the tool you’ve needed has been built into Windows for over a decade, and it gets very little attention.

Windows Management Instrumentation (WMI)

WMI is the API that allows access to Windows low-level systems and information. CPU utilization, free memory, number of drives and their free space – this can all be easily retrieved from WMI. But it’s not just all read-only data – you can make changes as well. It’s a pretty dull concept until you realize that WMI extends across all Windows systems and allows this level of control and access between each one.

WMIC.EXE

WMIC is the command-line utility that allows access to all things WMI. It’s capable of being extended far beyond what I’m detailing here, but I can’t find a use case that isn’t terribly specific to my work environment – so I’ll try to keep it as generic and accessible as possible.

I didn’t find WMIC that exciting until I found the /NODE parameter, which lets you query and execute against a remote host. Want to run a command against another computer?
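Something like this (the host names throughout these examples are placeholders):

wmic /node:"SERVER01" process call create "notepad.exe"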

 

Want a list of running services?
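Filter the Win32_Service alias by state:

wmic /node:"SERVER01" service where "State='Running'" get Name,DisplayName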

 

Have an out of control JRun process? Why not kill it?
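Assuming the offending process is named jrun.exe:

wmic /node:"SERVER01" process where "Name='jrun.exe'" call terminate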

 

Need to run defragmentation on a whole list of servers from a text file? BOOM!
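WMIC can even read the node list straight from a text file with /node:@ – roughly like this (the file name and defrag arguments are illustrative):

wmic /node:@"servers.txt" process call create "cmd.exe /c defrag.exe C: -f"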

 

Disable the Themes service to annoy friends and co-workers?
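Win32_Service exposes ChangeStartMode and StopService methods, so something along these lines (host name illustrative):

wmic /node:"WORKSTATION01" service where "Name='Themes'" call changestartmode "Disabled"
wmic /node:"WORKSTATION01" service where "Name='Themes'" call stopservice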

 

This all operates under the assumption that you’re logged on as a user with adequate rights to the target. If you’re working between computers that have no shared credentials, but you know a username and password, WMIC’s /USER: and /PASSWORD: switches let you supply them explicitly.