
Help in Understanding Interfaces


bitseeker

Programmer
Nov 27, 2005
I'm learning about interfaces, and I found thread678-812597 to be useful, along with snippets from many other posts. I've been having a hard time connecting the general statements about what interfaces are used for with concrete examples (and there are plenty of both...). I think I may have figured something about this out, and I was hoping to post that understanding here and get some feedback to fine-tune it (or trash it and start over!).

Here's the view I came up with.
----------------------
Interfaces are very useful in the DESIGN of an app, and the payoff comes in the CODING and later in the MAINTENANCE of the app.

Even if I'm coding an app myself, as a one-person team, there's a lot to remember. If I use interfaces to
describe general relationships between different areas of code, that allows me to work inside those areas of
code one at a time without having to remember (or more likely, constantly go check) what is going on in
some other area.

The thing that has thrown me off in the concrete examples of interfaces is that I could never see the "design intent" behind them (though this gives me an inkling of what "intention" means in "Foundations of OOP Using .Net 2.0 Patterns" by Christian Gross).

So what I came to is that you have to, somewhere, define what the methods and attributes inside the interface mean. That is, if you specify a "save" method in an interface, you have to say somewhere (such as in in-line documentation), "the 'save' method is used to write objects to the database". So in a sense, you're writing high-level pseudo code (like in a "mock object" from thread678-971775) that expresses what the interface is supposed to enforce (note, "enforce", not "do"). THAT's the "medium" that connects the high-level design intent to the actual specification of an interface in code. THAT's what makes an interface mean something, and how interfaces do their job of "locking" designs into the code. (I think in all the examples I've seen, these descriptions were assumed by context.)
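For instance (a minimal C# sketch; IRecordStore, Save, and SqlRecordStore are names I'm making up just for illustration, not code from anywhere):

Code:
// The interface itself only names the member; the in-line documentation is
// what carries the design intent ("Save means: write the object to the database").
public interface IRecordStore
{
    /// <summary>
    /// Writes the given object to the database. The interface can't make that
    /// happen by itself; it only obliges every implementer to provide a Save
    /// that is supposed to behave this way.
    /// </summary>
    void Save(object record);
}

// Any class that claims to be an IRecordStore must supply Save,
// however it chooses to fulfil the documented intent.
public class SqlRecordStore : IRecordStore
{
    public void Save(object record)
    {
        // ... code that actually writes the record to the database ...
    }
}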

So you could do a lot of design work using interfaces (or conceptual interfaces, before they are coded) describing the relationships between large areas of code (groups of classes, etc.) before starting to work on the details.
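Continuing the made-up sketch from above, one area of the code can then be written purely against the interface, before the persistence area even exists:

Code:
public interface IRecordStore { void Save(object record); }   // as sketched above

// A client area of the code: it knows only the interface and the documented
// intent of Save, so it can be written and maintained without looking at
// (or waiting for) whatever class ends up doing the saving.
public class OrderProcessor
{
    private readonly IRecordStore _store;

    public OrderProcessor(IRecordStore store)
    {
        _store = store;       // any implementer of IRecordStore will do
    }

    public void Complete(object order)
    {
        // ... business logic for completing the order ...
        _store.Save(order);   // relies only on what Save is documented to mean
    }
}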

So in one sense, the "contract" is between the client and the serving classes, but in another sense the "contract" is between the designer and the coder (aha, another inkling about what Gross is talking about in the book cited above).

So in a sense, the reason interfaces don't "do" very much in code is that they are mostly DESIGN tools that enforce behavior on the coding process. They're not supposed to make bits move; they're supposed to keep programmers on track.
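To make that concrete with the hypothetical IRecordStore from above: the thing doing the actual pushing is the compiler.

Code:
public interface IRecordStore { void Save(object record); }   // as above

public class FileRecordStore : IRecordStore
{
    // If this method were deleted, the class would simply fail to compile
    // ("FileRecordStore does not implement interface member IRecordStore.Save").
    // The interface moves no bits itself; it just keeps the coder on track.
    public void Save(object record)
    {
        System.IO.File.AppendAllText("records.log", record + System.Environment.NewLine);
    }
}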

Then I can see there's another whole layer, which is basically programmer management, that's required to get the programmer (which includes myself, in my one-person project) to use the interfaces in a rigorous way. So maybe interfaces are "programming process management" tools as well as design tools. That would be the third meaning of the "contract" aspect of interfaces.

Based on this understanding, hopefully the other interesting things about interfaces (which I don't claim to
understand well yet) will start to make more sense.
------------------------------------------------

So, that's what I think I know about interfaces. Any feedback would be appreciated.

Thanks!
 
<Changing the implementation isn't much of a problem, as long as it still does what it was intended to do.

Well, therein lies the problem. It often doesn't still do what it was intended to do. It's like saying "changing an interface isn't a problem, so long as you preserve backward compatibility."

I find your arguments to be a matter of preference, as I have with all of the black box/white box reuse arguments.

SundialServices, I find your comments both highly intelligent and amusing. I also find your "ripple" to be a phenomenon in either methodology of reuse; the "ripple" effect simply shows up in a different aspect of each. With interfaces, the ripple happens in objects' attempts to establish communication; with implementation inheritance, it happens in objects' attempts to make sense of one another after establishing communication.

Bob
 
BobRodes said:
It often doesn't still do what it was intended to do.
I guess it depends who you're working with and what their experience level is, but I remember that when I was maintaining a huge base class and adding new functionality to it, I would add extra defaulted parameters to functions. I made sure that it continued to work as expected with old code and performed the new functionality if you passed in something other than the default parameters.
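Roughly the kind of thing I mean (a simplified C# sketch with made-up names, not the actual code, which wasn't necessarily even C#), where an added overload plays the role of the extra defaulted parameter:

Code:
public class ReportExporterBase
{
    // Original method: existing callers keep compiling and behaving as before.
    public virtual void Export(string path)
    {
        Export(path, false);    // false = the old default behaviour
    }

    // Added later: passing something other than the default turns on
    // the new functionality without disturbing old code.
    public virtual void Export(string path, bool compress)
    {
        // ... write the report to 'path', compressing it if requested ...
    }
}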
When a developer did change an interface without telling others about it, they'd certainly hear the screams of people getting compile errors on the new build... ;)
 
I'd much rather hear those screams than those of a million customers encountering a subtle fragile base class problem. ;-)
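For anyone who hasn't been bitten by it, the fragile base class problem looks roughly like this (a contrived C# sketch): a change that is internal to the base class silently breaks a derived class that depended on an undocumented detail.

Code:
public class ItemCollection
{
    public virtual void Add(object item)
    {
        // ... store one item ...
    }

    // Version 1 funnelled every insertion through Add. If a later version is
    // "optimised" to store the items directly instead, nothing fails to compile,
    // but CountingCollection below silently starts under-counting.
    public virtual void AddRange(object[] items)
    {
        foreach (object item in items)
            Add(item);
    }
}

public class CountingCollection : ItemCollection
{
    public int Count;

    public override void Add(object item)
    {
        Count++;            // quietly assumes AddRange always calls Add
        base.Add(item);
    }
}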
 
I'm with Bob. If you're making a major change in an existing code base (like changing an interface or an abstract base class), there comes a point where you want to break stuff. It gives you an opportunity to refactor and identify all the systems which need attention.

Which is not to say it's an opportunity to throw everything that's been on the feature wish-list into the app at that time. You still have to respect the priority of the work-items, as there's probably a good reason why someone's pet feature was buried at the bottom of the list.

Chip H.


____________________________________________________________________
If you want to get the best response to a question, please read FAQ222-2244 first
 
Oh believe me, I would have loved to completely re-write that base class from scratch, since it wasn't designed very well in the first place; but try telling the Product Managers you want to delay the next version by a week, let alone a few months! :-D
I think it's the Product Managers that lead to buggy software in their ever more ridiculous development schedules...
 
<I think it's the Product Managers that lead to buggy software in their ever more ridiculous development schedules...

It's also possible that upper management is to blame by creating bonus structures that inherently foster internal competition. If, for example, Product Managers get rewarded for beating rollout dates, and Process Managers get rewarded for reliability, you create a lack of consensus on goals. If both were rewarded for both, they would be more likely to work together.

So, really, it's a matter of evaluating situations on a case by case basis and making what changes are necessary to promote the harmony of the whole.

Bob

 
<I would add extra defaulted parameters to functions.
In the COM world, this is illegal, which means that the problems inherent in changing interfaces are well known, and there are established ways of dealing with them in any black box reuse methodology. I'll stick to the way COM would handle this, since it's what I know. If you find that you need to add a parameter to a method, you create a new interface, and let new clients (or retrofitted old clients) use the new interface. Of course, that leads to the possibility that a newer client might encounter an older server that doesn't support the new interface, so that needs to be handled as well.
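In managed code the same discipline looks roughly like this (hypothetical names; a real COM interface would be declared in IDL, but the shape is the same): the published interface is frozen, and the "extra parameter" lives in a new interface.

Code:
// The original, published interface is never edited again.
public interface ISaver
{
    void Save(object record);
}

// The method that needed an extra parameter goes into a new interface instead.
public interface ISaver2 : ISaver
{
    void Save(object record, bool overwrite);
}

// A newer server implements both; older servers keep implementing only ISaver.
public class Saver2 : ISaver2
{
    public void Save(object record) { Save(record, false); }
    public void Save(object record, bool overwrite) { /* ... actually save ... */ }
}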

COM handles all this by requiring that every COM object implement the IUnknown interface, which exposes, among other methods, the QueryInterface method. A client calls QueryInterface to ask the object whether it supports a particular interface; if it does, the object hands back a pointer to that interface. So a client can find out whether the new interface is available before trying to use it.

In other words, this is another way to add parameters to a method without compromising interface integrity. Furthermore, since it doesn't go mucking about in the original implementation, one could argue (I don't necessarily, but it is often said) that it is a more stable methodology on a large scale than the one you espouse, albeit one that requires more overhead.
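The managed analogue of that QueryInterface negotiation is just a runtime type check: the client asks whether the newer interface is there and falls back if it isn't (reusing the hypothetical ISaver/ISaver2 names from above).

Code:
public interface ISaver { void Save(object record); }                            // as above
public interface ISaver2 : ISaver { void Save(object record, bool overwrite); }  // as above

public static class SaveHelper
{
    public static void SaveRecord(ISaver saver, object record)
    {
        // Roughly what a COM client does with QueryInterface: ask the object
        // whether it supports the newer interface before relying on it.
        ISaver2 saver2 = saver as ISaver2;
        if (saver2 != null)
            saver2.Save(record, true);   // newer server: use the extended method
        else
            saver.Save(record);          // older server: fall back to the original
    }
}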

Bob

 
Well then I guess it's a good thing we weren't using COM. ;-)
I wanted to learn COM programming, but since .NET came along it's hard to even find any COM books anymore... :-(

The classes I was maintaining were for a QA test harness to test an API the development team was writing, so using COM in that situation probably wouldn't have mattered much anyway.
 
<Well then I guess it's a good thing we weren't using COM.

I don't see why. By requiring interface immutability, COM solves all the difficulties with interfaces that you set forth. So, you were sitting around breaking interfaces, and saying gee, a good thing we aren't using COM? ;-)

All kidding aside, I'm only mentioning COM to illustrate my main point: "the problems that inhere in changing interfaces are well known." To be sure, any black box reuse paradigm should include interface immutability, which you have already implied in your previous posts.

Bob
 
You know, I wish that in the real world we really had time and tools for all this "refactoring." And I wish that we could create regression-test cases for everything that a real-world production system has to do.

But real-world systems rarely operate that way, and even if they started out that way, I've consistently noticed that they do not continue. Over time, the tight dependency that is naturally created by these bright new object/inheritance-oriented methodologies becomes decidedly, and deliberately, loosened. Code starts to be duplicated. Sometimes, massively so.

Why? Because two parallel lines of code, belonging to different apps in the same company and doing the same thing, but nonetheless written and maintained separately, can be reliably worked-on independently. When you are knee-deep in the hoopla on the accounting system, you are not simultaneously breaking the HR system.
 
<even if they started out that way, I've consistently noticed that they do not continue.

Cynic.
 