Sunday, May 3, 2015

Adding schema when using ASP.NET 5 Identity with Entity Framework 7

This post is about bleeding-edge technology (Entity Framework 7 beta 4) and may become outdated in the foreseeable future.

Good database design separates responsibilities; otherwise you end up with a monolithic trash-bin database. The best way to achieve separation of responsibilities is to have each responsibility in its own database and to disable cross-querying. That gives you a database for Accounts, one for Products, one for Emails and so on, depending on your business and its domain. One database for each domain.

In these cloud days, that can be quite expensive, so we can settle for the next best thing: schemas. The most famous schema on SQL Server is dbo. It is the default schema, and sadly it is used in 99% of the cases where schemas are applied.

When using ASP.NET 5 Identity with Entity Framework 7 and Migrations, you will see that the tables are put in the dbo schema. Changing this behavior is not straightforward.

The solution

You might recognize the code below: it is the code from the template used when creating an ASP.NET 5 web site, though I have removed a little. I want to have the identity tables in the schema Accounts.

1. First, override the OnModelCreating(ModelBuilder builder) method.

 using Microsoft.AspNet.Identity.EntityFramework;  
 using Microsoft.Data.Entity;  
 namespace Example.Models  
 {  
   public class ApplicationUser : IdentityUser  
   {  
   }  
   public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
   {  
     public ApplicationDbContext()  
     {  
     }  
     protected override void OnModelCreating(ModelBuilder builder)  
     {  
       // Remember to create the schema in the database, until EF7 can handle schemas correctly  
       builder.Entity<ApplicationUser>().ForRelational().Table("AspNetUsers", "Accounts");  
       builder.Entity<IdentityUserClaim<string>>().ForRelational().Table("AspNetUserClaims", "Accounts");  
       builder.Entity<IdentityUserLogin<string>>().ForRelational().Table("AspNetUserLogins", "Accounts");  
       builder.Entity<IdentityUserRole<string>>().ForRelational().Table("AspNetUserRoles", "Accounts");  
       builder.Entity<IdentityRole>().ForRelational().Table("AspNetRoles", "Accounts");  
       builder.Entity<IdentityRoleClaim<string>>().ForRelational().Table("AspNetRoleClaims", "Accounts");  
       base.OnModelCreating(builder);  
     }  
   }  
 }  

2. This step might not seem graceful, because it should be fully handled by Migrations in Entity Framework. But schemas and Entity Framework 7 are currently not working as desired, and schema creation is not handled by Migrations.

Go to SQL Server Management Studio. If your database is not created at this point, create it. Then run CREATE SCHEMA <schema name>. In this case it will be CREATE SCHEMA Accounts. As mentioned, this part should have been handled by Migrations.

3. Run Migrations

4. Continue with your project :-)

Taking this a step further, it could be considered to have a DbContext for each domain of your app, and each of these contexts could have its own schema.
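As a rough sketch of that idea, a second context for a hypothetical Products domain could pin its tables to their own schema with the same ForRelational().Table(...) call used above. The Product entity and ProductsDbContext are made-up names, not part of the template:

 using Microsoft.Data.Entity;  
 namespace Example.Models  
 {  
   public class Product  
   {  
     public int Id { get; set; }  
     public string Name { get; set; }  
   }  
   public class ProductsDbContext : DbContext  
   {  
     public DbSet<Product> Products { get; set; }  
     protected override void OnModelCreating(ModelBuilder builder)  
     {  
       // Keep this domain's tables in their own schema, like the Accounts schema above  
       builder.Entity<Product>().ForRelational().Table("Products", "Products");  
       base.OnModelCreating(builder);  
     }  
   }  
 }  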

Wednesday, April 29, 2015

Keeping your Azure Website warm and up to speed

So, you have deployed a web app to an Azure Website. As one might expect, the first request to the web site is slow; it might take 10 seconds or maybe even more to respond. That is because the web site is unloaded (cold) and has to be loaded (warmed up). This first request loads the site, and when it is loaded, it responds quite fast (depending on our code).

But after a period (around 30 minutes) of idle traffic, your web site unloads again. And to get it loaded again, it needs a request, and again it takes time to load.

In the Basic and Standard plans for Azure Websites, you can disable this behavior by setting the Always On option. That was a quick fix :-). If you are using Free or Shared Azure Websites, you can consider the following strategies:
  1. Do nothing, if you can live with it. If/when your site is frequently visited, it is not a problem. Only websites with low traffic, such as new sites, suffer from this issue.
  2. Use one of the 'ping my app' services on the web to request your site. I'm quite sure this is a solution you should avoid.
  3. Find/own/borrow/invent a machine which is on 24/7 and set up a job to make a request to your site every 5-10 minutes.
  4. Keep it all in Azure: create an Azure WebJob to make a request to your site every 5-10 minutes.

I'll explain solution 4. It might have some cons regarding pricing, but I'll cover that too.

Azure WebJobs

Azure WebJobs can handle the following extensions:

.cmd, .bat, .exe, .ps1, .sh, .php, .py, .js, .jar

I'll show an example in C#, where we are making a console .exe file. The Azure WebJob code:

 using System.Net;  
 namespace HeartBeat  
 {  
   class Program  
   {  
     static void Main(string[] args)  
     {  
       // Request the cheap ping endpoint to keep the site loaded  
       var webReq = (HttpWebRequest)WebRequest.Create("http://<your site>/special_ping_endpoint");  
       webReq.Method = "GET";  
       webReq.GetResponse().Dispose();  
     }  
   }  
 }  

Now, this is important: do not ping your landing page, e.g. www.example.com. It might cost more resources, especially if your landing page makes web calls and database lookups behind the scenes, and it might be noticeable on your Azure bill. Create a special minimal endpoint for the purpose, and make it return an empty result.

An example of such an endpoint in

ASP.NET MVC 5 and prior
 using System.Web.Mvc;  
 namespace ActionHandlers.Controllers  
 {  
   public class PingController : Controller  
   {  
     public ActionResult Get()  
     {  
       return new EmptyResult();  
     }  
   }  
 }  


Or ASP.NET 5 MVC 6
 using Microsoft.AspNet.Mvc;  
 namespace ActionHandlers.Controllers  
 {  
   public class ExampleController : Controller  
   {  
     public IActionResult Get()  
     {  
       return new EmptyResult();  
     }  
   }  
 }  


Azure WebJob considerations regarding costs

The WebJob is the part which is probably going to cost you, but it depends. Azure WebJobs depend on the Azure Scheduler, and the Azure Scheduler comes in 3 plans: Free, Standard and Premium. The biggest difference for our case is how frequently a job can run. With the Free plan, a job can run once an hour, while with Standard and Premium a job can run once a minute.

So with a newly created web site, it would be optimal to use the Standard plan and ping every 5-10 minutes to keep your site warm, but it is a bit pricey. You could instead use the Free plan, ping every hour, and hope for a visit 15-30 minutes after the ping; then your site is probably warm until the next ping. You could consider the following strategies for keeping your site warm:
  1. Completely new and fresh website: traffic is going to be very light, use the Standard plan.
  2. Website with light traffic: use the Free plan.
  3. Website with frequent traffic: no plan.
Using Free or Shared Websites with the Azure Scheduler is still cheaper than switching to Basic Websites and using Always On. Also, using a scheduler should only be a temporary solution, until your site has good traffic.

Alternative WebJob way

Making an Azure WebJob with a thread sleep of 5-10 minutes and running it continuously without a scheduler is not a recommendable solution, because Azure is able to unload websites with associated unscheduled WebJobs.

WebJob Installation

Put your WebJob code into a zip; in our case that will be the compiled exe and config file from either the Debug or Release folder of your Visual Studio project (I'll take the liberty to assume you're using Visual Studio).

Go to the Dashboard for your Website, and find the WebJobs tab. Add the WebJob.

Custom Action Results in ASP.NET 5 (VNEXT) (MVC6)

Before we start, you should be aware of this: the post is based on the ASP.NET 5 version which Visual Studio 2015 CTP 6 pulls down when it creates an ASP.NET 5 project, meaning it is a pre-release of ASP.NET 5. Things in ASP.NET 5 can change and outdate this post. It is highly unlikely, but it can happen.

Even though I keep referring to MVC 6, it is still a post regarding ASP.NET 5, or ASP.NET vNext (they are the same thing), and MVC 6 is a part of ASP.NET 5.

Why would I write a custom ActionResult

Yes, good question. There is plenty supported in ASP.NET 5, but sometimes you end up in a situation where you need something special. When I developed Your Favorite Snippet Tool, I needed to transfer binary data in a certain way to provide the best user experience. I created a custom ActionResult to handle it.

ActionResult in MVC 6 compared to earlier versions

ActionResults have changed a bit since the prior versions of MVC. Yes, you still have to inherit from ActionResult, and yes, you still have to override an ExecuteResult method when making custom ActionResults.

The most noticeable difference is that in MVC 6, ExecuteResult has another signature compared to prior versions, and an ExecuteResultAsync method has been added.

ExecuteResult for ASP.NET MVC 5 and Prior


public abstract void ExecuteResult(ControllerContext context)

ExecuteResult for ASP.NET MVC 6


public virtual Task ExecuteResultAsync(ActionContext context)
public virtual void ExecuteResult(ActionContext context)

Two things you might notice: the MVC 6 methods are virtual, and they take an ActionContext instead of a ControllerContext. There is not much to say about the contexts; they are very similar. Because the methods are virtual, there are no override constraints. It means that you can override ExecuteResultAsync, ExecuteResult or both, but you are not forced to.

Which to override, ExecuteResultAsync or ExecuteResult

It depends, but preferably ExecuteResultAsync, because that is the one which is called. Inside ActionResult, which you have to inherit from, the following logic is happening:

 public abstract class ActionResult : IActionResult  
   {  
     public virtual Task ExecuteResultAsync(ActionContext context)  
     {  
       ExecuteResult(context);  
       return Task.FromResult(true);  
     }  
     public virtual void ExecuteResult(ActionContext context)  
     {  
     }  
   }  

So you see, ExecuteResultAsync is still called even though you only override ExecuteResult. Plus, it would not make sense to enforce overriding both methods.


Show me some code

I have made a string writer result. It is not the most exciting example, but it proves the point.

The custom ActionResult

 using Microsoft.AspNet.Mvc;  
 using System.Text;  
 using System.Threading.Tasks;  
 namespace CustomActionResults  
 {  
   internal class StringWriterResult : ActionResult  
   {  
     private byte[] _stringAsByteArray;  
     public StringWriterResult(string stringToWrite)  
     {  
       _stringAsByteArray = Encoding.ASCII.GetBytes(stringToWrite);  
     }  
     public override Task ExecuteResultAsync(ActionContext context)  
     {  
       context.HttpContext.Response.StatusCode = 200;  
       return context.HttpContext.Response.Body.WriteAsync(_stringAsByteArray, 0, _stringAsByteArray.Length);  
     }  
   }  
 }  


The string writer ActionResult in action:

 using CustomActionResults;  
 using Microsoft.AspNet.Mvc;  
 namespace ActionHandlers.Controllers  
 {  
   public class ExampleController : Controller  
   {  
     public IActionResult Get()  
     {  
       return new StringWriterResult("Hello World!");  
     }  
   }  
 }  

Insert the code in some ASP.NET 5 project, and you should get Hello World! when hitting ~/Example/Get.


Monday, April 27, 2015

Introducing Your Favorite Snippet Tool

Snippets in Visual Studio and SQL Server Management Studio are a great help and tremendous time savers. Unfortunately, the con of VS and SSMS snippets is that they are tedious to create. I have a feeling that makes custom snippets less appealing to use, because creating them involves working with XML, and VS and SSMS offer no help.

History

Well, a couple of days ago I decided to make a snippet of some SQL which I had realized I would have to write regularly in the future. I was pretty tired of writing this SQL, and then I remembered: to create a snippet you have to set up an XML document. Then I got really tired. In hope of an easy solution, I searched the web for an online snippet creator. All I found were tools which had to be downloaded. No offence, these downloadables are probably mighty fine, but nowadays I think a tool like a snippet creator should be online: easy to reach, and not another downloaded tool to soil your computer.

Priorities can be strange sometimes, and I decided that I would rather write an online tool myself which could make snippets than handcraft another snippet.

So here after a small coding marathon, I'll present to you:

YOUR FAVORITE SNIPPET TOOL (that is the name)



Enjoy!

FAQ

Q: Why is the link www.snippettool.net and not www.yourfavoritesnippettool.com when the tool name is Your Favorite Snippet Tool?
A: For your convenience. It is much easier to remember www.snippettool.net and to type it right.

Q: The first release is version 0.8.0, is it production ready?
A: Yes. There are some extended features for VB which will be there in a later release. Further, I have some ideas for UI improvements. Also, until I have had some more feedback, it wouldn't be right to release it as version 1.0.0.

Q: VSI packages are supported, what about VSIX packages?
A: VSIX does not support snippets by default. Hacks must be applied to make VSIX work with snippets.

Q: The Visual Studio Content Installer does not install the VSI package to Visual Studio 2xxx, why?
A: That is because the Visual Studio Content Installer is a strange piece of software, especially if you have more than one version of Visual Studio on your machine.

Q: I found a bug, what should I do?
A: I would appreciate it if you would write to me about it. Contact information is at the bottom of the page.


Saturday, April 11, 2015

SSIS: An easy SCD optimization for dev and prod

The value of reading this post depends on how you work with SSIS and how database nursing is handled within your organization.

The optimization is a single index, but if you only nurse indexes in prod, you can waste a lot of time when developing SCDs in SSIS. The method is simple: when you know the nature of your SCD, you can create an index right away and reduce your development waiting time, especially if you are testing with bigger volumes of data.

Let me show you

Let's say you have the following table definitions, and you are working in an SSIS project using Visual Studio:

-- Staging
CREATE TABLE Staging.Customers
(
CustomerId UNIQUEIDENTIFIER,
FirstName NVARCHAR(200),
MiddleInitials NVARCHAR(200),
LastName NVARCHAR(200),
AccountId INT,
CreationDate DATETIME2
)
GO 

-- Dimension
CREATE TABLE dbo.dimCustomers
(
CustomerDwhKey INT IDENTITY(1,1),
[Current] BIT, 
CustomerId UNIQUEIDENTIFIER,
FirstName NVARCHAR(200),
MiddleInitials NVARCHAR(200),
LastName NVARCHAR(200),
AccountId INT,
CreationDate DATETIME2,
CONSTRAINT PK_CustomerId PRIMARY KEY CLUSTERED (CustomerDwhKey)
)
GO


You have a Data Flow where you transfer data from Staging.Customers to the dimension dbo.dimCustomers using the built-in Slowly Changing Dimension component:


In our example setup, CustomerId will be a so-called business key, and Current will be the indicator for which row is current. It should also be noted that it is possible to have more than one business key.


We'll configure attributes as:


Now, the Slowly Changing Dimension component works in the following way:

For each entity it receives, it will search the dimension table for an entity which has the same business keys (plural!) and is flagged as current, or in plain SQL:

SELECT  attribute[, attribute] FROM dimension_table WHERE current_flag = true AND business_key = input_business_key[, business_key = input_business_key]

Or as it will look in our example

SELECT AccountId, CreationDate, FirstName, MiddleInitials, LastName FROM dbo.dimCustomers WHERE [Current] = 1 AND CustomerId = some_key

Further, in case of a historical change, the current entity in the dimension must be expired by setting Current = 0.

UPDATE dbo.dimCustomers SET [Current] = 0 WHERE [Current] = 1 AND CustomerId = some_key 

The solution

As you might have realized by now, we can improve performance tremendously by putting an index on the current flag and the business keys (again, plural!). For each entity passing through the Slowly Changing Dimension component, there will be at least 1, but likely 2, searches in the dimension table. By knowing the business keys, the current flag and the nature of the Slowly Changing Dimension component, you can predict the index which will improve performance.

The index for our sample will be

CREATE NONCLUSTERED INDEX IX_Current_CustomerId ON dbo.dimCustomers
(
[Current],
CustomerId --- Remember to include each business key
)

Should the index be filtered? I'll let that be up to you.

Indexes, bulk and loading of dimensions

Some tend to drop indexes when loading dimensions, with the argument: bulk loading is fastest without indexes, which SQL Server would have to maintain while loading. This argument has to be revised when working with the Slowly Changing Dimension component.

Because the component searches the dimension so heavily, it will (in general) be faster to load with indexes than without. If there are no indexes, each entity going through the component will require at least one table scan, which is quite expensive and gets more expensive as your dimension grows.

That's all

Monday, March 9, 2015

Wrestling the Azure Storage REST API - Part 2

This post is about the authorization HTTP header used when making requests to the Azure Storage API. There are some dependencies on the previous part of this series, especially regarding the x-ms-date header field.

Authorization

The authorization field is expressed in this way:

Authorization="[SharedKey|SharedKeyLite] <AccountName>:<Signature>"

The authorization supports 2 schemes for calculating signatures, Shared Key and Shared Key Lite. The scheme you are using must be stated with either SharedKey or SharedKeyLite as the first thing in the header field.

The difference between the schemes is that Shared Key Lite is backward compatible with earlier versions of the Azure Storage API. I can't remember having seen any example using Shared Key; I guess that is because it requires more effort to make it work.

Important!!! When using dates in the authorization, these dates must be the same as x-ms-date, or else the authorization will fail.

Shared Key

Blob, Queue and File Storage signatures are calculated one way, while Table Storage signatures are calculated another way.

Blob, Queue and File Storage:

StringToSign = 
VERB + "\n" +
Content-Encoding + "\n" +
Content-Language + "\n" +
Content-Length + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
If-Modified-Since + "\n" +
If-Match + "\n" +
If-None-Match + "\n" +
If-Unmodified-Since + "\n" +
Range + "\n" +
CanonicalizedHeaders +
CanonicalizedResource;

Table Storage:

StringToSign = 
VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" + 
CanonicalizedResource;

Shared Key Lite

Like with Shared Key, there is a difference in how the signature is calculated depending on which kind of storage is used:

Blob, Queue and File Storage:

StringToSign =
VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedHeaders +
CanonicalizedResource;

Table Storage:

StringToSign = 
Date + "\n" +
CanonicalizedResource

When comparing the 2 schemes, it begins to make sense why most choose to use Shared Key Lite.

Which parameters must be filled depends heavily on context. E.g. Date: while it must be set in every Table request, there are some Blob requests where it must not be set.

Canonicalized Headers

Just take all the headers starting with x-ms-, sort them and concatenate them separated by \n.

Example (taken from the Azure Storage documentation):

 x-ms-date:Sun, 20 Sep 2009 20:36:40 GMT\nx-ms-meta-m1:v1\nx-ms-meta-m2:v2\n
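In C#, the simplified rule above could look roughly like this. The CanonicalizedHeadersSample class and its helper method are made-up names for illustration; the real canonicalization has a few more rules (whitespace trimming, duplicate headers), so treat this as a sketch only:

 using System;  
 using System.Collections.Generic;  
 using System.Linq;  
 class CanonicalizedHeadersSample  
 {  
   // Keep the x-ms-* headers, sort them by (lower-cased) name and join them as "name:value\n"  
   static string CanonicalizedHeaders(IDictionary<string, string> headers)  
   {  
     return string.Concat(headers  
       .Where(h => h.Key.StartsWith("x-ms-", StringComparison.OrdinalIgnoreCase))  
       .OrderBy(h => h.Key.ToLowerInvariant(), StringComparer.Ordinal)  
       .Select(h => h.Key.ToLowerInvariant() + ":" + h.Value + "\n"));  
   }  
   static void Main()  
   {  
     var headers = new Dictionary<string, string>  
     {  
       { "x-ms-meta-m2", "v2" },  
       { "x-ms-meta-m1", "v1" },  
       { "x-ms-date", "Sun, 20 Sep 2009 20:36:40 GMT" },  
       { "Content-Type", "application/json" }  
     };  
     // Prints the same string as the documentation example above  
     Console.Write(CanonicalizedHeaders(headers));  
   }  
 }  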

Canonicalized Resources

Canonicalized resources are formed in the following way:

Canonicalized resource = /account/resource

Example:

For this request

GET https://myaccount.table.core.windows.net/Tables HTTP/1.1

The canonicalized resource will be /myaccount/Tables

Query parameters must not be included, unless you make the following kind of request (taken from the documentation):

GET https://<account-name>.table.core.windows.net/?restype=service&comp=properties HTTP/1.1

Here the canonicalized resource will be /myaccount/?comp=properties
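A minimal C# sketch of the plain case above; the CanonicalizedResourceSample name is made up, and special cases such as ?comp=properties need extra handling:

 using System;  
 class CanonicalizedResourceSample  
 {  
   // "/" + account + the path of the request URI (plain case only)  
   static string CanonicalizedResource(string account, Uri requestUri)  
   {  
     return "/" + account + requestUri.AbsolutePath;  
   }  
   static void Main()  
   {  
     var uri = new Uri("https://myaccount.table.core.windows.net/Tables");  
     Console.WriteLine(CanonicalizedResource("myaccount", uri)); // /myaccount/Tables  
   }  
 }  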

Calculating the signature

Here the Azure Storage REST API documentation is pretty weak. 2 things it misses: when using HMAC, you need to supply a key and a message. In the context of making requests to the Azure Storage REST API, the key is either the Primary or Secondary key, which can be obtained from the Azure portal. The message is the StringToSign defined earlier in this post.

Also, the Primary and Secondary keys found on the Azure portal are base64 encoded; you need to decode them in order to use them.

So what the documentation states as

Signature=Base64(HMAC-SHA256(UTF8(StringToSign))) 

Is in reality


Signature=Base64(HMAC-SHA256(Base64Decode(key), UTF8(StringToSign))) 

Where key is either the Primary or Secondary key.
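In C#, the whole calculation could look roughly like this. The class and method names are made up, and the key and StringToSign in Main are placeholders, only there to show the call:

 using System;  
 using System.Security.Cryptography;  
 using System.Text;  
 class SignatureSample  
 {  
   // Signature = Base64(HMAC-SHA256(Base64Decode(key), UTF8(StringToSign)))  
   static string Sign(string base64Key, string stringToSign)  
   {  
     using (var hmac = new HMACSHA256(Convert.FromBase64String(base64Key)))  
     {  
       return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));  
     }  
   }  
   static void Main()  
   {  
     // Placeholder key and a Table-style Shared Key StringToSign  
     var fakeKey = Convert.ToBase64String(Encoding.UTF8.GetBytes("not a real key"));  
     var stringToSign = "GET\n\n\nSun, 20 Sep 2009 20:36:40 GMT\n/myaccount/Tables";  
     Console.WriteLine(Sign(fakeKey, stringToSign));  
   }  
 }  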

And this is all for Authorization.

Wrestling the Azure Storage REST API - Part 1

Motivation

With Azure SDKs for a wide variety of programming languages, why should anybody want to learn about the Azure Storage REST API? 

Maybe there is no SDK for your favourite language, which was my case. Maybe the official SDK does not support the latest API version, which could mean it is not possible to communicate with JSON in Table Storage. Maybe you are just curious.

This blog post is based on my work on GoHaveAzureStorage, and hopefully you will also benefit from the challenges I have had with Azure Storage.

Request break down 

A REST call looks like this:

GET https://myaccount.table.core.windows.net/Tables HTTP/1.1

This request is used to get all tables for a storage account. There are 2 mandatory HTTP header fields which you must send with every request to make it work, plus an optional one which I recommend. They are:
  • x-ms-date       - the time of the request
  • x-ms-version  - which API version the request is targeting
  • authorization  - a security digest
The first 2 headers will be explained in this post, while Authorization will be explained in part 2. A full request with all three headers is sketched below.
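As a rough C# sketch, putting the three headers on a request could look like this. The account name, the version value and the signature are placeholders, and the call will fail with 403 until a real signature is supplied (see part 2):

 using System;  
 using System.Net;  
 class RequestSample  
 {  
   static void Main()  
   {  
     var request = (HttpWebRequest)WebRequest.Create("https://myaccount.table.core.windows.net/Tables");  
     request.Method = "GET";  
     request.Headers.Add("x-ms-date", DateTime.UtcNow.ToString("R"));  
     request.Headers.Add("x-ms-version", "2014-02-14");  
     request.Headers.Add("Authorization", "SharedKeyLite myaccount:<signature>");  
     using (var response = (HttpWebResponse)request.GetResponse())  
     {  
       Console.WriteLine((int)response.StatusCode);  
     }  
   }  
 }  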

The URL

First the easy part: you can use either HTTP or HTTPS; otherwise it is more or less straightforward.

The x-ms-date header field

This field is used by Azure for validation and authorization. A valid request must be at most 15 minutes old, and it must not be dated in the future. It can be expressed as:

current time - 15 minutes < x-ms-date <= current time

Pro tip

As it is close to impossible to be completely time-synchronized with Azure, it is recommended to subtract a few minutes from the current time when sending a request.

One important last thing: Azure only understands time in RFC1123 format and GMT+0.

If you have Thu, 12 Feb 2015 21:16:45 in a UTC+1 time zone,
it must be converted to: Thu, 12 Feb 2015 20:16:45 GMT
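A minimal C# sketch of producing such a value, assuming the .NET "R" format specifier (which emits RFC1123 with a GMT suffix):

 using System;  
 class XmsDateSample  
 {  
   static void Main()  
   {  
     // DateTime.UtcNow is already GMT+0; subtracting a couple of minutes  
     // guards against clock skew, as recommended above  
     string xMsDate = DateTime.UtcNow.AddMinutes(-2).ToString("R");  
     Console.WriteLine(xMsDate); // e.g. Thu, 12 Feb 2015 20:16:45 GMT  
   }  
 }  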

The x-ms-version header field (Pro tip)

This is an optional field, but you should prefer to set it, or else you will hit an earlier version of the Azure Storage API. You might experience challenges with JSON in Table Storage or with Shared Access Keys if you are not using the latest version.

The versions are named after dates: the date on which the API version was released. I'm not sure whether this is a good solution, because I find dates harder to remember than version numbers. So I have to look them up once in a while here: https://msdn.microsoft.com/en-us/library/azure/dd894041.aspx



Wednesday, February 11, 2015

Introducing GoHaveAzureStorage

Motivation

In a private project, I wanted to reach Azure Table Storage from some Go applications. My only issue was that I couldn't find a proper Go Azure Table Storage library. So when I got hold of the Azure Table Storage REST API, I decided to write a Go Azure Table Storage lib, and then I decided to do a full Azure Storage lib.

GoHaveAzureStorage

I have used the Microsoft Azure Storage API documentation as inspiration for the library. With this approach, I hope people who are experienced in programming against Azure will find GoHaveAzureStorage just as easy to use.

A small sample:

package main

import (
 "fmt"
 "gohaveazurestorage"
)

//Either the primary or the secondary key found on Azure Portal
var key = "PrimaryOrSecondaryKey"

//Storage account name
var account = "Account"

func main() {
 // Create an instance of the lib
 goHaveAzureStorage := gohaveazurestorage.New(account, key)

 // From the lib instance, we can create multiple client instances
 tableStorage := goHaveAzureStorage.TableStorage()

 //Creating a table
 httpStatusCode := tableStorage.CreateTable("Table")
 if httpStatusCode != 201 {
  fmt.Println("Create table error")
 }
}

For documentation and progress of the project:
https://github.com/ChristianHenrikReich/gohaveazurestorage

Tuesday, January 20, 2015

Go: Import cycle not allowed

Level: 1 where 1 is noob and 5 is totally awesome
System: Go

Among the programming languages I use, Go is one of my favorites. Go intentionally lacks features such as generics, inheritance and some other features which people might see as standard in a programming language. Some might see this as a weakness of the language, and some might see it as a strength. In the end, the purpose is to keep the language simple, and to build software with less complex code.

A thing which can be challenging in Go is object parent-child relations, where a child knows its parent, like the code below tries to do, and fails with an 'import cycle not allowed' error:

package child

import "Parent"

type Child struct {
  parent *parent.Parent
}

func (child *Child) PrintParentMessage() {
  child.parent.PrintMessage()
}

func NewChild(p *parent.Parent) *Child {
  return &Child{parent: p}
}

package parent

import (
  "fmt"
  "child"
)

type Parent struct {
  message string
}

func (parent *Parent) PrintMessage() {
  fmt.Println(parent.message)
}

func (parent *Parent) CreateNewChild() *child.Child {
  return child.NewChild(parent)
}

func NewParent() *Parent {
  return &Parent{message: "Hello World"}
}

package main

import (
  "parent"
)

func main() {
  p := parent.NewParent()
  c := p.CreateNewChild()
  c.PrintParentMessage()
}

Cross-referring packages are not allowed in Go. The best thing would be if Parent could keep track of the children; then there would be no issues, and some might argue the code would be cleaner. But the world is not perfect, and sometimes circumstances force design decisions like this.

To make things work, we can use the nice duck-typing feature (interfaces) which Go supports. In general, it is good practice to use interfaces; it keeps the code decoupled and flexible.

So if we make the interface IParent and use it, then everything works out:

package child

type IParent interface {
  PrintMessage()
}

type Child struct {
  parent IParent
}

func (child *Child) PrintParentMessage() {
  child.parent.PrintMessage()
}

func NewChild(parent IParent) *Child {
  return &Child{parent: parent }
}

package parent

import (
  "fmt"
  "child"
)

type Parent struct {
  message string
}

func (parent *Parent) PrintMessage() {
  fmt.Println(parent.message)
}

func (parent *Parent) CreateNewChild() *child.Child {
  return child.NewChild(child.IParent(parent))
}

func NewParent() *Parent {
  return &Parent{message: "Hello World"}
}

package main

import (
  "parent"
)

func main() {
  p := parent.NewParent()
  c := p.CreateNewChild()
  c.PrintParentMessage()
}

Now 'Hello World' is written to the output, as intended. Of course, the interface could be placed in a 3rd package. As always, it depends...